Jul 6 23:54:46.113698 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:54:46.113733 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:46.113747 kernel: BIOS-provided physical RAM map:
Jul 6 23:54:46.113758 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:54:46.113767 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 6 23:54:46.113777 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul 6 23:54:46.113789 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jul 6 23:54:46.113804 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jul 6 23:54:46.113815 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 6 23:54:46.113825 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 6 23:54:46.113836 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 6 23:54:46.113848 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 6 23:54:46.113859 kernel: printk: bootconsole [earlyser0] enabled
Jul 6 23:54:46.113870 kernel: NX (Execute Disable) protection: active
Jul 6 23:54:46.113889 kernel: APIC: Static calls initialized
Jul 6 23:54:46.113902 kernel: efi: EFI v2.7 by Microsoft
Jul 6 23:54:46.113925 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Jul 6 23:54:46.113944 kernel: SMBIOS 3.1.0 present.
Jul 6 23:54:46.113957 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul 6 23:54:46.113969 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 6 23:54:46.113982 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul 6 23:54:46.113994 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Jul 6 23:54:46.114007 kernel: Hyper-V: Nested features: 0x1e0101
Jul 6 23:54:46.114020 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 6 23:54:46.114035 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 6 23:54:46.114048 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 6 23:54:46.114061 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 6 23:54:46.114075 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul 6 23:54:46.114087 kernel: tsc: Detected 2593.904 MHz processor
Jul 6 23:54:46.114101 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:54:46.114114 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:54:46.114127 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul 6 23:54:46.114140 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 6 23:54:46.114171 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:54:46.114184 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul 6 23:54:46.114196 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul 6 23:54:46.114209 kernel: Using GB pages for direct mapping
Jul 6 23:54:46.114222 kernel: Secure boot disabled
Jul 6 23:54:46.114235 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:54:46.114248 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 6 23:54:46.114266 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:46.114282 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:46.114296 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 6 23:54:46.114309 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 6 23:54:46.114323 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:46.114337 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:46.114351 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:46.114367 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:46.114381 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:46.114394 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:46.114408 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:46.114422 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 6 23:54:46.114435 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul 6 23:54:46.114450 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 6 23:54:46.114463 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 6 23:54:46.114479 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 6 23:54:46.114493 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 6 23:54:46.114506 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul 6 23:54:46.114520 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul 6 23:54:46.114534 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 6 23:54:46.114547 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul 6 23:54:46.114561 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:54:46.114575 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:54:46.114588 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 6 23:54:46.114605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul 6 23:54:46.114619 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul 6 23:54:46.114632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 6 23:54:46.114646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 6 23:54:46.114660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 6 23:54:46.114674 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 6 23:54:46.114687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 6 23:54:46.114701 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 6 23:54:46.114714 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 6 23:54:46.114731 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 6 23:54:46.114744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 6 23:54:46.114758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul 6 23:54:46.114772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul 6 23:54:46.114785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul 6 23:54:46.114799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul 6 23:54:46.114813 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul 6 23:54:46.114826 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul 6 23:54:46.114840 kernel: Zone ranges:
Jul 6 23:54:46.114856 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:54:46.114873 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 6 23:54:46.114887 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 6 23:54:46.114900 kernel: Movable zone start for each node
Jul 6 23:54:46.114914 kernel: Early memory node ranges
Jul 6 23:54:46.114928 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 6 23:54:46.114941 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul 6 23:54:46.114955 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 6 23:54:46.114969 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 6 23:54:46.114985 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 6 23:54:46.114998 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:54:46.115012 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 6 23:54:46.115026 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul 6 23:54:46.115039 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 6 23:54:46.115052 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul 6 23:54:46.115066 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:54:46.115079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:54:46.115093 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:54:46.115109 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 6 23:54:46.115123 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:54:46.115137 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 6 23:54:46.115157 kernel: Booting paravirtualized kernel on Hyper-V
Jul 6 23:54:46.115171 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:54:46.115185 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:54:46.115199 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:54:46.115212 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:54:46.115226 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:54:46.115242 kernel: Hyper-V: PV spinlocks enabled
Jul 6 23:54:46.115255 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:54:46.115270 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:46.115285 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:54:46.115298 kernel: random: crng init done
Jul 6 23:54:46.115311 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 6 23:54:46.115325 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:54:46.115339 kernel: Fallback order for Node 0: 0
Jul 6 23:54:46.115355 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jul 6 23:54:46.115380 kernel: Policy zone: Normal
Jul 6 23:54:46.115397 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:54:46.115411 kernel: software IO TLB: area num 2.
Jul 6 23:54:46.115426 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 310128K reserved, 0K cma-reserved)
Jul 6 23:54:46.115441 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:54:46.115455 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:54:46.115470 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:54:46.115484 kernel: Dynamic Preempt: voluntary
Jul 6 23:54:46.115499 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:54:46.115518 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:54:46.115535 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:54:46.115551 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:54:46.115564 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:54:46.115578 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:54:46.115596 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:54:46.115626 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:54:46.115655 kernel: Using NULL legacy PIC
Jul 6 23:54:46.115669 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 6 23:54:46.115682 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:54:46.115696 kernel: Console: colour dummy device 80x25
Jul 6 23:54:46.115710 kernel: printk: console [tty1] enabled
Jul 6 23:54:46.115725 kernel: printk: console [ttyS0] enabled
Jul 6 23:54:46.115738 kernel: printk: bootconsole [earlyser0] disabled
Jul 6 23:54:46.115749 kernel: ACPI: Core revision 20230628
Jul 6 23:54:46.115763 kernel: Failed to register legacy timer interrupt
Jul 6 23:54:46.115781 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:54:46.115796 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 6 23:54:46.115809 kernel: Hyper-V: Using IPI hypercalls
Jul 6 23:54:46.115820 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 6 23:54:46.115834 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 6 23:54:46.115848 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 6 23:54:46.115860 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 6 23:54:46.115878 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 6 23:54:46.115895 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 6 23:54:46.115911 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Jul 6 23:54:46.115923 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 6 23:54:46.115936 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 6 23:54:46.115950 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:54:46.115962 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:54:46.115974 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:54:46.115987 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 6 23:54:46.116000 kernel: RETBleed: Vulnerable
Jul 6 23:54:46.116014 kernel: Speculative Store Bypass: Vulnerable
Jul 6 23:54:46.116031 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:54:46.116044 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:54:46.116057 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:54:46.116071 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:54:46.116086 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:54:46.116101 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:54:46.116116 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 6 23:54:46.116131 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 6 23:54:46.116146 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 6 23:54:46.116217 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:54:46.116231 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 6 23:54:46.116251 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 6 23:54:46.116266 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 6 23:54:46.116281 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul 6 23:54:46.116296 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:54:46.116311 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:54:46.116326 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:54:46.116340 kernel: landlock: Up and running.
Jul 6 23:54:46.116355 kernel: SELinux: Initializing.
Jul 6 23:54:46.116370 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:54:46.116385 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:54:46.116401 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 6 23:54:46.116416 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:46.116433 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:46.116447 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:46.116460 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 6 23:54:46.116475 kernel: signal: max sigframe size: 3632
Jul 6 23:54:46.116489 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:54:46.116504 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:54:46.116518 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:54:46.116533 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:54:46.116547 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:54:46.116564 kernel: .... node #0, CPUs: #1
Jul 6 23:54:46.116579 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul 6 23:54:46.116594 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 6 23:54:46.116608 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:54:46.116622 kernel: smpboot: Max logical packages: 1
Jul 6 23:54:46.116635 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Jul 6 23:54:46.116650 kernel: devtmpfs: initialized
Jul 6 23:54:46.116664 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:54:46.116681 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 6 23:54:46.116695 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:54:46.116709 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:54:46.116724 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:54:46.116738 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:54:46.116752 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:54:46.116766 kernel: audit: type=2000 audit(1751846085.030:1): state=initialized audit_enabled=0 res=1
Jul 6 23:54:46.116779 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:54:46.116793 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:54:46.116810 kernel: cpuidle: using governor menu
Jul 6 23:54:46.116824 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:54:46.116838 kernel: dca service started, version 1.12.1
Jul 6 23:54:46.116852 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jul 6 23:54:46.116866 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:54:46.116880 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:54:46.116894 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:54:46.116908 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:54:46.116922 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:54:46.116939 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:54:46.116953 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:54:46.116967 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:54:46.116981 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:54:46.116995 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:54:46.117009 kernel: ACPI: Interpreter enabled
Jul 6 23:54:46.117023 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:54:46.117037 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:54:46.117051 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:54:46.117068 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 6 23:54:46.117082 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 6 23:54:46.117096 kernel: iommu: Default domain type: Translated
Jul 6 23:54:46.117110 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:54:46.117124 kernel: efivars: Registered efivars operations
Jul 6 23:54:46.117138 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:54:46.117167 kernel: PCI: System does not support PCI
Jul 6 23:54:46.117181 kernel: vgaarb: loaded
Jul 6 23:54:46.117195 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 6 23:54:46.117212 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:54:46.117227 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:54:46.117240 kernel: pnp: PnP ACPI init
Jul 6 23:54:46.117254 kernel: pnp: PnP ACPI: found 3 devices
Jul 6 23:54:46.117269 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:54:46.117282 kernel: NET: Registered PF_INET protocol family
Jul 6 23:54:46.117296 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:54:46.117310 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 6 23:54:46.117324 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:54:46.117340 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:54:46.117353 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 6 23:54:46.117367 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 6 23:54:46.117382 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:54:46.117396 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:54:46.117409 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:54:46.117437 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:54:46.117451 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:54:46.117464 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 6 23:54:46.117481 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jul 6 23:54:46.117496 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:54:46.117510 kernel: Initialise system trusted keyrings
Jul 6 23:54:46.117528 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 6 23:54:46.117541 kernel: Key type asymmetric registered
Jul 6 23:54:46.117555 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:54:46.117569 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:54:46.117585 kernel: io scheduler mq-deadline registered
Jul 6 23:54:46.117600 kernel: io scheduler kyber registered
Jul 6 23:54:46.117618 kernel: io scheduler bfq registered
Jul 6 23:54:46.117633 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:54:46.117649 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:54:46.117665 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:54:46.117680 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 6 23:54:46.117694 kernel: i8042: PNP: No PS/2 controller found.
Jul 6 23:54:46.117878 kernel: rtc_cmos 00:02: registered as rtc0
Jul 6 23:54:46.117998 kernel: rtc_cmos 00:02: setting system clock to 2025-07-06T23:54:45 UTC (1751846085)
Jul 6 23:54:46.118112 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 6 23:54:46.118129 kernel: intel_pstate: CPU model not supported
Jul 6 23:54:46.118143 kernel: efifb: probing for efifb
Jul 6 23:54:46.118170 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 6 23:54:46.118184 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 6 23:54:46.118198 kernel: efifb: scrolling: redraw
Jul 6 23:54:46.118213 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:54:46.118227 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:54:46.118244 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:54:46.118259 kernel: pstore: Using crash dump compression: deflate
Jul 6 23:54:46.118273 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 6 23:54:46.118287 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:54:46.118301 kernel: Segment Routing with IPv6
Jul 6 23:54:46.118315 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:54:46.118329 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:54:46.118343 kernel: Key type dns_resolver registered
Jul 6 23:54:46.118357 kernel: IPI shorthand broadcast: enabled
Jul 6 23:54:46.118374 kernel: sched_clock: Marking stable (884002800, 46489500)->(1152240300, -221748000)
Jul 6 23:54:46.118389 kernel: registered taskstats version 1
Jul 6 23:54:46.118403 kernel: Loading compiled-in X.509 certificates
Jul 6 23:54:46.118417 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:54:46.118430 kernel: Key type .fscrypt registered
Jul 6 23:54:46.118444 kernel: Key type fscrypt-provisioning registered
Jul 6 23:54:46.118458 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:54:46.118472 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:54:46.118486 kernel: ima: No architecture policies found
Jul 6 23:54:46.118503 kernel: clk: Disabling unused clocks
Jul 6 23:54:46.118517 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:54:46.118531 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:54:46.118544 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:54:46.118559 kernel: Run /init as init process
Jul 6 23:54:46.118578 kernel: with arguments:
Jul 6 23:54:46.118594 kernel: /init
Jul 6 23:54:46.118607 kernel: with environment:
Jul 6 23:54:46.118621 kernel: HOME=/
Jul 6 23:54:46.118638 kernel: TERM=linux
Jul 6 23:54:46.118655 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:54:46.118672 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:54:46.118690 systemd[1]: Detected virtualization microsoft.
Jul 6 23:54:46.118706 systemd[1]: Detected architecture x86-64.
Jul 6 23:54:46.118722 systemd[1]: Running in initrd.
Jul 6 23:54:46.118737 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:54:46.118752 systemd[1]: Hostname set to <localhost>.
Jul 6 23:54:46.118772 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:54:46.118788 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:54:46.118804 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:54:46.118819 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:54:46.118837 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:54:46.118853 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:54:46.118868 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:54:46.118884 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:54:46.118905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:54:46.118921 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:54:46.118937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:54:46.118953 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:54:46.118969 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:54:46.118985 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:54:46.119000 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:54:46.119019 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:54:46.119035 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:54:46.119051 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:54:46.119067 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:54:46.119083 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:54:46.119099 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:54:46.119115 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:54:46.119131 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:54:46.119150 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:54:46.119182 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:54:46.119197 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:54:46.119212 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:54:46.119227 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:54:46.119241 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:54:46.119257 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:54:46.119302 systemd-journald[176]: Collecting audit messages is disabled.
Jul 6 23:54:46.119340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:46.119356 systemd-journald[176]: Journal started
Jul 6 23:54:46.119386 systemd-journald[176]: Runtime Journal (/run/log/journal/9e1372ddbd7d4c27a82bb78061a81669) is 8.0M, max 158.8M, 150.8M free.
Jul 6 23:54:46.115915 systemd-modules-load[177]: Inserted module 'overlay'
Jul 6 23:54:46.131463 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:54:46.132225 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:54:46.147480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:54:46.154532 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:54:46.163086 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:54:46.168188 kernel: Bridge firewalling registered
Jul 6 23:54:46.169896 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jul 6 23:54:46.173232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:54:46.179332 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:54:46.189333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:54:46.195432 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:46.201533 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:54:46.204971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:54:46.219372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:46.226601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:54:46.232986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:54:46.246365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:54:46.249519 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:54:46.261575 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:54:46.278248 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:46.290340 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:54:46.309243 systemd-resolved[205]: Positive Trust Anchors:
Jul 6 23:54:46.313464 dracut-cmdline[214]: dracut-dracut-053
Jul 6 23:54:46.313464 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:46.309260 systemd-resolved[205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:54:46.309318 systemd-resolved[205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:54:46.326549 systemd-resolved[205]: Defaulting to hostname 'linux'.
Jul 6 23:54:46.328329 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:54:46.356228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:54:46.397183 kernel: SCSI subsystem initialized
Jul 6 23:54:46.408179 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:54:46.419200 kernel: iscsi: registered transport (tcp)
Jul 6 23:54:46.440630 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:54:46.440737 kernel: QLogic iSCSI HBA Driver
Jul 6 23:54:46.477031 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:54:46.485427 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:54:46.513175 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:54:46.513250 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:54:46.517959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:54:46.558196 kernel: raid6: avx512x4 gen() 18455 MB/s
Jul 6 23:54:46.577171 kernel: raid6: avx512x2 gen() 18212 MB/s
Jul 6 23:54:46.596165 kernel: raid6: avx512x1 gen() 18235 MB/s
Jul 6 23:54:46.615165 kernel: raid6: avx2x4 gen() 18260 MB/s
Jul 6 23:54:46.634171 kernel: raid6: avx2x2 gen() 18049 MB/s
Jul 6 23:54:46.654478 kernel: raid6: avx2x1 gen() 13691 MB/s
Jul 6 23:54:46.654535 kernel: raid6: using algorithm avx512x4 gen() 18455 MB/s
Jul 6 23:54:46.675020 kernel: raid6: .... xor() 6957 MB/s, rmw enabled
Jul 6 23:54:46.675059 kernel: raid6: using avx512x2 recovery algorithm
Jul 6 23:54:46.698180 kernel: xor: automatically using best checksumming function avx
Jul 6 23:54:46.845182 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:54:46.854715 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:54:46.861480 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:54:46.877760 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jul 6 23:54:46.882041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:54:46.897337 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:54:46.910231 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jul 6 23:54:46.938305 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:54:46.946337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:54:46.990503 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:54:47.025485 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:54:47.056948 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:54:47.073102 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:54:47.060618 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:54:47.066640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:54:47.079495 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:54:47.091623 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:54:47.101755 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:54:47.105391 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:54:47.105430 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:54:47.101888 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:47.118468 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:47.132057 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:54:47.132282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:47.135847 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:47.146055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:47.156668 kernel: hv_vmbus: Vmbus version:5.2
Jul 6 23:54:47.159944 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:54:47.186176 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 6 23:54:47.197949 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 6 23:54:47.198339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:47.203438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:54:47.203578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:47.211567 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:54:47.216970 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:47.234792 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 6 23:54:47.234822 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 6 23:54:47.234851 kernel: PTP clock support registered
Jul 6 23:54:47.239536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:47.242651 kernel: hv_vmbus: registering driver hv_storvsc
Jul 6 23:54:47.249266 kernel: scsi host1: storvsc_host_t
Jul 6 23:54:47.255121 kernel: scsi host0: storvsc_host_t
Jul 6 23:54:47.255372 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 6 23:54:47.262403 kernel: hv_utils: Registering HyperV Utility Driver
Jul 6 23:54:47.262445 kernel: hv_vmbus: registering driver hv_utils
Jul 6 23:54:47.262460 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 6 23:54:47.262489 kernel: hv_utils: Heartbeat IC version 3.0
Jul 6 23:54:47.268301 kernel: hv_utils: Shutdown IC version 3.2
Jul 6 23:54:48.030041 kernel: hv_utils: TimeSync IC version 4.0
Jul 6 23:54:48.030076 kernel: hv_vmbus: registering driver hid_hyperv
Jul 6 23:54:48.030205 systemd-resolved[205]: Clock change detected. Flushing caches.
Jul 6 23:54:48.043138 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 6 23:54:48.046129 kernel: hv_vmbus: registering driver hv_netvsc
Jul 6 23:54:48.046170 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 6 23:54:48.057100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:48.073770 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 6 23:54:48.074097 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:54:48.077411 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:48.083253 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 6 23:54:48.101155 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 6 23:54:48.101372 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 6 23:54:48.101499 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 6 23:54:48.104149 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 6 23:54:48.109146 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 6 23:54:48.114754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:48.118328 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:54:48.118353 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 6 23:54:48.246745 kernel: hv_netvsc 7c1e5235-f6fe-7c1e-5235-f6fe7c1e5235 eth0: VF slot 1 added
Jul 6 23:54:48.256334 kernel: hv_vmbus: registering driver hv_pci
Jul 6 23:54:48.256387 kernel: hv_pci 9fb8838b-4a28-481b-8ffd-6ac8be1a6596: PCI VMBus probing: Using version 0x10004
Jul 6 23:54:48.265305 kernel: hv_pci 9fb8838b-4a28-481b-8ffd-6ac8be1a6596: PCI host bridge to bus 4a28:00
Jul 6 23:54:48.265613 kernel: pci_bus 4a28:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jul 6 23:54:48.268480 kernel: pci_bus 4a28:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 6 23:54:48.273339 kernel: pci 4a28:00:02.0: [15b3:1016] type 00 class 0x020000
Jul 6 23:54:48.278139 kernel: pci 4a28:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 6 23:54:48.281269 kernel: pci 4a28:00:02.0: enabling Extended Tags
Jul 6 23:54:48.294146 kernel: pci 4a28:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4a28:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jul 6 23:54:48.299422 kernel: pci_bus 4a28:00: busn_res: [bus 00-ff] end is updated to 00
Jul 6 23:54:48.299733 kernel: pci 4a28:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 6 23:54:48.466641 kernel: mlx5_core 4a28:00:02.0: enabling device (0000 -> 0002)
Jul 6 23:54:48.471145 kernel: mlx5_core 4a28:00:02.0: firmware version: 14.30.5000
Jul 6 23:54:48.660442 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 6 23:54:48.699493 kernel: hv_netvsc 7c1e5235-f6fe-7c1e-5235-f6fe7c1e5235 eth0: VF registering: eth1
Jul 6 23:54:48.699869 kernel: mlx5_core 4a28:00:02.0 eth1: joined to eth0
Jul 6 23:54:48.702098 kernel: mlx5_core 4a28:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 6 23:54:48.717761 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 6 23:54:48.729322 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (444)
Jul 6 23:54:48.729352 kernel: mlx5_core 4a28:00:02.0 enP18984s1: renamed from eth1
Jul 6 23:54:48.740137 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (455)
Jul 6 23:54:48.766724 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:54:48.776657 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 6 23:54:48.776788 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 6 23:54:48.791380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:54:48.805174 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:54:48.813141 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:54:49.819148 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:54:49.820022 disk-uuid[603]: The operation has completed successfully.
Jul 6 23:54:49.904869 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:54:49.904998 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:54:49.923482 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:54:49.929350 sh[689]: Success
Jul 6 23:54:49.955184 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:54:50.137775 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:54:50.156252 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:54:50.170316 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:54:50.187271 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:54:50.187320 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:50.191693 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:54:50.194649 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:54:50.197166 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:54:50.459311 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:54:50.465058 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:54:50.480289 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:54:50.505737 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:54:50.519942 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:50.520005 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:50.522557 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:54:50.556214 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:54:50.565984 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:54:50.573163 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:50.580822 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:54:50.593340 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:54:50.614543 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:54:50.626290 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:54:50.648535 systemd-networkd[873]: lo: Link UP
Jul 6 23:54:50.648544 systemd-networkd[873]: lo: Gained carrier
Jul 6 23:54:50.650744 systemd-networkd[873]: Enumeration completed
Jul 6 23:54:50.650831 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:54:50.652448 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:54:50.652453 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:54:50.653989 systemd[1]: Reached target network.target - Network.
Jul 6 23:54:50.709145 kernel: mlx5_core 4a28:00:02.0 enP18984s1: Link up
Jul 6 23:54:50.763146 kernel: hv_netvsc 7c1e5235-f6fe-7c1e-5235-f6fe7c1e5235 eth0: Data path switched to VF: enP18984s1
Jul 6 23:54:50.763586 systemd-networkd[873]: enP18984s1: Link UP
Jul 6 23:54:50.763707 systemd-networkd[873]: eth0: Link UP
Jul 6 23:54:50.763866 systemd-networkd[873]: eth0: Gained carrier
Jul 6 23:54:50.763878 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:54:50.768366 systemd-networkd[873]: enP18984s1: Gained carrier
Jul 6 23:54:50.840180 systemd-networkd[873]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 6 23:54:51.346530 ignition[840]: Ignition 2.19.0
Jul 6 23:54:51.346542 ignition[840]: Stage: fetch-offline
Jul 6 23:54:51.346591 ignition[840]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:51.349734 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:54:51.346602 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:51.346725 ignition[840]: parsed url from cmdline: ""
Jul 6 23:54:51.346730 ignition[840]: no config URL provided
Jul 6 23:54:51.346736 ignition[840]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:54:51.346748 ignition[840]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:54:51.346756 ignition[840]: failed to fetch config: resource requires networking
Jul 6 23:54:51.370374 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:54:51.346976 ignition[840]: Ignition finished successfully
Jul 6 23:54:51.387514 ignition[882]: Ignition 2.19.0
Jul 6 23:54:51.387525 ignition[882]: Stage: fetch
Jul 6 23:54:51.387773 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:51.387783 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:51.387875 ignition[882]: parsed url from cmdline: ""
Jul 6 23:54:51.387878 ignition[882]: no config URL provided
Jul 6 23:54:51.387882 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:54:51.387892 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:54:51.387914 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 6 23:54:51.469324 ignition[882]: GET result: OK
Jul 6 23:54:51.469423 ignition[882]: config has been read from IMDS userdata
Jul 6 23:54:51.469465 ignition[882]: parsing config with SHA512: 9c9529621c30345321cb8b073cf097b88ad1f8b179d7a08d2b465c52655811af79ac4c0d03b3c93599713868f5b3159457eb0ba27a2f336b5e46de05a8fe9042
Jul 6 23:54:51.474722 unknown[882]: fetched base config from "system"
Jul 6 23:54:51.474898 unknown[882]: fetched base config from "system"
Jul 6 23:54:51.475321 ignition[882]: fetch: fetch complete
Jul 6 23:54:51.474907 unknown[882]: fetched user config from "azure"
Jul 6 23:54:51.475325 ignition[882]: fetch: fetch passed
Jul 6 23:54:51.477460 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:54:51.475374 ignition[882]: Ignition finished successfully
Jul 6 23:54:51.514303 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:54:51.531450 ignition[888]: Ignition 2.19.0
Jul 6 23:54:51.531461 ignition[888]: Stage: kargs
Jul 6 23:54:51.531699 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:51.531719 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:51.532622 ignition[888]: kargs: kargs passed
Jul 6 23:54:51.532664 ignition[888]: Ignition finished successfully
Jul 6 23:54:51.542445 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:54:51.557277 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:54:51.572098 ignition[894]: Ignition 2.19.0
Jul 6 23:54:51.572109 ignition[894]: Stage: disks
Jul 6 23:54:51.574638 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:54:51.572377 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:51.572387 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:51.573236 ignition[894]: disks: disks passed
Jul 6 23:54:51.573278 ignition[894]: Ignition finished successfully
Jul 6 23:54:51.586041 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:54:51.595170 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:54:51.598321 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:54:51.603611 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:54:51.607868 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:54:51.622377 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:54:51.680650 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 6 23:54:51.691619 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:54:51.703218 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:54:51.801134 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:54:51.801682 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:54:51.804457 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:54:51.843235 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:54:51.848899 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:54:51.859281 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:54:51.884579 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (913)
Jul 6 23:54:51.881085 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:54:51.881147 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:54:51.897415 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:54:51.912359 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:51.912394 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:51.912414 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:54:51.916308 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:54:51.922102 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:54:51.922906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:54:52.448522 coreos-metadata[915]: Jul 06 23:54:52.448 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:54:52.455957 coreos-metadata[915]: Jul 06 23:54:52.455 INFO Fetch successful
Jul 6 23:54:52.458752 coreos-metadata[915]: Jul 06 23:54:52.458 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:54:52.474475 coreos-metadata[915]: Jul 06 23:54:52.474 INFO Fetch successful
Jul 6 23:54:52.490085 coreos-metadata[915]: Jul 06 23:54:52.489 INFO wrote hostname ci-4081.3.4-a-fe0535f741 to /sysroot/etc/hostname
Jul 6 23:54:52.491951 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:54:52.540527 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:54:52.553284 systemd-networkd[873]: enP18984s1: Gained IPv6LL
Jul 6 23:54:52.604201 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:54:52.627330 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:54:52.632935 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:54:52.745294 systemd-networkd[873]: eth0: Gained IPv6LL
Jul 6 23:54:53.454303 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:54:53.464248 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:54:53.475405 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:54:53.483170 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:54:53.491422 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:53.504883 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:54:53.515251 ignition[1031]: INFO : Ignition 2.19.0
Jul 6 23:54:53.515251 ignition[1031]: INFO : Stage: mount
Jul 6 23:54:53.519180 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:53.519180 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:53.519180 ignition[1031]: INFO : mount: mount passed
Jul 6 23:54:53.519180 ignition[1031]: INFO : Ignition finished successfully
Jul 6 23:54:53.518345 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:54:53.537213 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:54:53.546073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:54:53.567139 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1042)
Jul 6 23:54:53.571135 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:53.571170 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:53.575626 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:54:53.581343 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:54:53.582808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:54:53.605473 ignition[1058]: INFO : Ignition 2.19.0
Jul 6 23:54:53.605473 ignition[1058]: INFO : Stage: files
Jul 6 23:54:53.610034 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:53.610034 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:53.610034 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:54:53.622345 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:54:53.622345 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:54:53.687109 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:54:53.691961 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:54:53.691961 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:54:53.687650 unknown[1058]: wrote ssh authorized keys file for user: core
Jul 6 23:54:53.702396 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 6 23:54:53.707230 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 6 23:54:53.707230 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:54:53.707230 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:54:53.780071 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:54:54.070020 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:54:54.070020 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:54:54.893109 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:54:55.208365 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:54:55.208365 ignition[1058]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 6 23:54:55.266864 ignition[1058]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:54:55.276416 ignition[1058]: INFO : files: files passed
Jul 6 23:54:55.276416 ignition[1058]: INFO : Ignition finished successfully
Jul 6 23:54:55.270243 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:54:55.311089 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:54:55.321282 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:54:55.340015 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:54:55.340152 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:54:55.351818 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:54:55.351818 initrd-setup-root-after-ignition[1086]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:54:55.360046 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:54:55.357537 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:54:55.364414 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:54:55.381270 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:54:55.405416 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:54:55.405552 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:54:55.416858 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:54:55.421968 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:54:55.427334 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:54:55.434331 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:54:55.451903 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:54:55.461285 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:54:55.471792 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:54:55.477979 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:54:55.483992 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:54:55.484222 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:54:55.484371 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:54:55.485055 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:54:55.485925 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:54:55.486459 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:54:55.486876 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:54:55.487416 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:54:55.487845 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:54:55.488320 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:54:55.488758 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:54:55.489178 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jul 6 23:54:55.489585 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:54:55.489958 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:54:55.490090 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:54:55.490806 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:54:55.491658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:54:55.492026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:54:55.525002 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:54:55.571671 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:54:55.571856 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:54:55.580190 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:54:55.580385 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:54:55.589544 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:54:55.589688 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:54:55.598489 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:54:55.598659 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:54:55.612452 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:54:55.619385 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:54:55.624162 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:54:55.624348 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:54:55.627831 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:54:55.627978 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 6 23:54:55.635081 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:54:55.635231 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:54:55.660762 ignition[1111]: INFO : Ignition 2.19.0 Jul 6 23:54:55.660762 ignition[1111]: INFO : Stage: umount Jul 6 23:54:55.660762 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:54:55.660762 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:54:55.660762 ignition[1111]: INFO : umount: umount passed Jul 6 23:54:55.660762 ignition[1111]: INFO : Ignition finished successfully Jul 6 23:54:55.653396 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:54:55.653502 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:54:55.661595 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:54:55.661723 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:54:55.693180 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:54:55.693274 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:54:55.701615 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:54:55.701692 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:54:55.707419 systemd[1]: Stopped target network.target - Network. Jul 6 23:54:55.715442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:54:55.715532 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:54:55.722159 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:54:55.730278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:54:55.733540 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:54:55.741668 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 6 23:54:55.744334 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:54:55.749673 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:54:55.752342 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:54:55.760155 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:54:55.760216 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:54:55.765633 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:54:55.765703 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:54:55.771818 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:54:55.771880 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:54:55.785812 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:54:55.791172 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:54:55.797442 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:54:55.798021 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:54:55.798106 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:54:55.802178 systemd-networkd[873]: eth0: DHCPv6 lease lost Jul 6 23:54:55.812025 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:54:55.815088 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:54:55.822763 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:54:55.822844 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:54:55.828860 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:54:55.831883 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:54:55.842290 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:54:55.845741 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 6 23:54:55.845801 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:54:55.852888 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:54:55.853280 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:54:55.854017 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:54:55.862844 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:54:55.862897 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:54:55.869388 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:54:55.869441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:54:55.876593 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:54:55.876650 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:54:55.886608 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:54:55.886742 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:54:55.904953 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:54:55.905033 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:54:55.913660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:54:55.913708 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:54:55.941001 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:54:55.941077 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:54:55.948013 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:54:55.948064 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:54:55.952993 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 6 23:54:55.953045 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:54:55.974151 kernel: hv_netvsc 7c1e5235-f6fe-7c1e-5235-f6fe7c1e5235 eth0: Data path switched from VF: enP18984s1 Jul 6 23:54:55.977323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:54:55.980722 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:54:55.980789 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:54:55.984778 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:54:55.984834 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:54:55.992333 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:54:55.992389 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:54:56.015658 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:54:56.015723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:54:56.025950 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:54:56.026083 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:54:56.037286 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:54:56.037402 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:54:56.044620 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:54:56.057290 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:54:56.141844 systemd[1]: Switching root. 
Jul 6 23:54:56.169936 systemd-journald[176]: Journal stopped Jul 6 23:54:46.113698 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 6 23:54:46.113733 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:54:46.113747 kernel: BIOS-provided physical RAM map: Jul 6 23:54:46.113758 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 6 23:54:46.113767 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jul 6 23:54:46.113777 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jul 6 23:54:46.113789 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jul 6 23:54:46.113804 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jul 6 23:54:46.113815 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jul 6 23:54:46.113825 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jul 6 23:54:46.113836 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jul 6 23:54:46.113848 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jul 6 23:54:46.113859 kernel: printk: bootconsole [earlyser0] enabled Jul 6 23:54:46.113870 kernel: NX (Execute Disable) protection: active Jul 6 23:54:46.113889 kernel: APIC: Static calls initialized Jul 6 23:54:46.113902 kernel: efi: EFI v2.7 by Microsoft Jul 6 23:54:46.113925 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 
SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Jul 6 23:54:46.113944 kernel: SMBIOS 3.1.0 present. Jul 6 23:54:46.113957 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jul 6 23:54:46.113969 kernel: Hypervisor detected: Microsoft Hyper-V Jul 6 23:54:46.113982 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jul 6 23:54:46.113994 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 Jul 6 23:54:46.114007 kernel: Hyper-V: Nested features: 0x1e0101 Jul 6 23:54:46.114020 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jul 6 23:54:46.114035 kernel: Hyper-V: Using hypercall for remote TLB flush Jul 6 23:54:46.114048 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 6 23:54:46.114061 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 6 23:54:46.114075 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jul 6 23:54:46.114087 kernel: tsc: Detected 2593.904 MHz processor Jul 6 23:54:46.114101 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 6 23:54:46.114114 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 6 23:54:46.114127 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jul 6 23:54:46.114140 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 6 23:54:46.114171 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 6 23:54:46.114184 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jul 6 23:54:46.114196 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jul 6 23:54:46.114209 kernel: Using GB pages for direct mapping Jul 6 23:54:46.114222 kernel: Secure boot disabled Jul 6 23:54:46.114235 kernel: ACPI: Early table checksum verification disabled Jul 6 23:54:46.114248 kernel: ACPI: RSDP 0x000000003FFFA014 
000024 (v02 VRTUAL) Jul 6 23:54:46.114266 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:54:46.114282 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:54:46.114296 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jul 6 23:54:46.114309 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jul 6 23:54:46.114323 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:54:46.114337 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:54:46.114351 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:54:46.114367 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:54:46.114381 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:54:46.114394 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:54:46.114408 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:54:46.114422 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jul 6 23:54:46.114435 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jul 6 23:54:46.114450 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jul 6 23:54:46.114463 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jul 6 23:54:46.114479 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jul 6 23:54:46.114493 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jul 6 23:54:46.114506 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jul 6 23:54:46.114520 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jul 6 23:54:46.114534 kernel: 
ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jul 6 23:54:46.114547 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jul 6 23:54:46.114561 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 6 23:54:46.114575 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 6 23:54:46.114588 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jul 6 23:54:46.114605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jul 6 23:54:46.114619 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jul 6 23:54:46.114632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jul 6 23:54:46.114646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jul 6 23:54:46.114660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jul 6 23:54:46.114674 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jul 6 23:54:46.114687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jul 6 23:54:46.114701 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jul 6 23:54:46.114714 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jul 6 23:54:46.114731 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jul 6 23:54:46.114744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jul 6 23:54:46.114758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jul 6 23:54:46.114772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jul 6 23:54:46.114785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jul 6 23:54:46.114799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jul 6 23:54:46.114813 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jul 6 
23:54:46.114826 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jul 6 23:54:46.114840 kernel: Zone ranges: Jul 6 23:54:46.114856 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 6 23:54:46.114873 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 6 23:54:46.114887 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jul 6 23:54:46.114900 kernel: Movable zone start for each node Jul 6 23:54:46.114914 kernel: Early memory node ranges Jul 6 23:54:46.114928 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 6 23:54:46.114941 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jul 6 23:54:46.114955 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jul 6 23:54:46.114969 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jul 6 23:54:46.114985 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jul 6 23:54:46.114998 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 6 23:54:46.115012 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 6 23:54:46.115026 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jul 6 23:54:46.115039 kernel: ACPI: PM-Timer IO Port: 0x408 Jul 6 23:54:46.115052 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jul 6 23:54:46.115066 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jul 6 23:54:46.115079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 6 23:54:46.115093 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 6 23:54:46.115109 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jul 6 23:54:46.115123 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 6 23:54:46.115137 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jul 6 23:54:46.115157 kernel: Booting paravirtualized kernel on Hyper-V Jul 6 23:54:46.115171 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 6 
23:54:46.115185 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 6 23:54:46.115199 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jul 6 23:54:46.115212 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jul 6 23:54:46.115226 kernel: pcpu-alloc: [0] 0 1 Jul 6 23:54:46.115242 kernel: Hyper-V: PV spinlocks enabled Jul 6 23:54:46.115255 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 6 23:54:46.115270 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:54:46.115285 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:54:46.115298 kernel: random: crng init done Jul 6 23:54:46.115311 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 6 23:54:46.115325 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:54:46.115339 kernel: Fallback order for Node 0: 0 Jul 6 23:54:46.115355 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jul 6 23:54:46.115380 kernel: Policy zone: Normal Jul 6 23:54:46.115397 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:54:46.115411 kernel: software IO TLB: area num 2. 
Jul 6 23:54:46.115426 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 310128K reserved, 0K cma-reserved) Jul 6 23:54:46.115441 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 6 23:54:46.115455 kernel: ftrace: allocating 37966 entries in 149 pages Jul 6 23:54:46.115470 kernel: ftrace: allocated 149 pages with 4 groups Jul 6 23:54:46.115484 kernel: Dynamic Preempt: voluntary Jul 6 23:54:46.115499 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:54:46.115518 kernel: rcu: RCU event tracing is enabled. Jul 6 23:54:46.115535 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 6 23:54:46.115551 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:54:46.115564 kernel: Rude variant of Tasks RCU enabled. Jul 6 23:54:46.115578 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:54:46.115596 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:54:46.115626 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 6 23:54:46.115655 kernel: Using NULL legacy PIC Jul 6 23:54:46.115669 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jul 6 23:54:46.115682 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 6 23:54:46.115696 kernel: Console: colour dummy device 80x25 Jul 6 23:54:46.115710 kernel: printk: console [tty1] enabled Jul 6 23:54:46.115725 kernel: printk: console [ttyS0] enabled Jul 6 23:54:46.115738 kernel: printk: bootconsole [earlyser0] disabled Jul 6 23:54:46.115749 kernel: ACPI: Core revision 20230628 Jul 6 23:54:46.115763 kernel: Failed to register legacy timer interrupt Jul 6 23:54:46.115781 kernel: APIC: Switch to symmetric I/O mode setup Jul 6 23:54:46.115796 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 6 23:54:46.115809 kernel: Hyper-V: Using IPI hypercalls Jul 6 23:54:46.115820 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jul 6 23:54:46.115834 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jul 6 23:54:46.115848 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jul 6 23:54:46.115860 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jul 6 23:54:46.115878 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jul 6 23:54:46.115895 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jul 6 23:54:46.115911 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904) Jul 6 23:54:46.115923 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 6 23:54:46.115936 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 6 23:54:46.115950 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 6 23:54:46.115962 kernel: Spectre V2 : Mitigation: Retpolines Jul 6 23:54:46.115974 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 6 23:54:46.115987 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jul 6 23:54:46.116000 kernel: RETBleed: Vulnerable Jul 6 23:54:46.116014 kernel: Speculative Store Bypass: Vulnerable Jul 6 23:54:46.116031 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jul 6 23:54:46.116044 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 6 23:54:46.116057 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 6 23:54:46.116071 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 6 23:54:46.116086 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 6 23:54:46.116101 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 6 23:54:46.116116 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 6 23:54:46.116131 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 6 23:54:46.116146 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 6 23:54:46.116217 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 6 23:54:46.116231 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 6 23:54:46.116251 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 6 23:54:46.116266 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 6 23:54:46.116281 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jul 6 23:54:46.116296 kernel: Freeing SMP alternatives memory: 32K Jul 6 23:54:46.116311 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:54:46.116326 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:54:46.116340 kernel: landlock: Up and running. Jul 6 23:54:46.116355 kernel: SELinux: Initializing. 
Jul 6 23:54:46.116370 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 6 23:54:46.116385 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 6 23:54:46.116401 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 6 23:54:46.116416 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:54:46.116433 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:54:46.116447 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:54:46.116460 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 6 23:54:46.116475 kernel: signal: max sigframe size: 3632 Jul 6 23:54:46.116489 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:54:46.116504 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:54:46.116518 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 6 23:54:46.116533 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:54:46.116547 kernel: smpboot: x86: Booting SMP configuration: Jul 6 23:54:46.116564 kernel: .... node #0, CPUs: #1 Jul 6 23:54:46.116579 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jul 6 23:54:46.116594 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 6 23:54:46.116608 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:54:46.116622 kernel: smpboot: Max logical packages: 1
Jul 6 23:54:46.116635 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Jul 6 23:54:46.116650 kernel: devtmpfs: initialized
Jul 6 23:54:46.116664 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:54:46.116681 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 6 23:54:46.116695 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:54:46.116709 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:54:46.116724 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:54:46.116738 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:54:46.116752 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:54:46.116766 kernel: audit: type=2000 audit(1751846085.030:1): state=initialized audit_enabled=0 res=1
Jul 6 23:54:46.116779 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:54:46.116793 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:54:46.116810 kernel: cpuidle: using governor menu
Jul 6 23:54:46.116824 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:54:46.116838 kernel: dca service started, version 1.12.1
Jul 6 23:54:46.116852 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jul 6 23:54:46.116866 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:54:46.116880 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:54:46.116894 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:54:46.116908 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:54:46.116922 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:54:46.116939 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:54:46.116953 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:54:46.116967 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:54:46.116981 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:54:46.116995 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:54:46.117009 kernel: ACPI: Interpreter enabled
Jul 6 23:54:46.117023 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:54:46.117037 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:54:46.117051 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:54:46.117068 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 6 23:54:46.117082 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 6 23:54:46.117096 kernel: iommu: Default domain type: Translated
Jul 6 23:54:46.117110 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:54:46.117124 kernel: efivars: Registered efivars operations
Jul 6 23:54:46.117138 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:54:46.117167 kernel: PCI: System does not support PCI
Jul 6 23:54:46.117181 kernel: vgaarb: loaded
Jul 6 23:54:46.117195 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 6 23:54:46.117212 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:54:46.117227 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:54:46.117240 kernel: pnp: PnP ACPI init
Jul 6 23:54:46.117254 kernel: pnp: PnP ACPI: found 3 devices
Jul 6 23:54:46.117269 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:54:46.117282 kernel: NET: Registered PF_INET protocol family
Jul 6 23:54:46.117296 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:54:46.117310 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 6 23:54:46.117324 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:54:46.117340 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:54:46.117353 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 6 23:54:46.117367 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 6 23:54:46.117382 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:54:46.117396 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:54:46.117409 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:54:46.117437 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:54:46.117451 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:54:46.117464 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 6 23:54:46.117481 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jul 6 23:54:46.117496 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:54:46.117510 kernel: Initialise system trusted keyrings
Jul 6 23:54:46.117528 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 6 23:54:46.117541 kernel: Key type asymmetric registered
Jul 6 23:54:46.117555 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:54:46.117569 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:54:46.117585 kernel: io scheduler mq-deadline registered
Jul 6 23:54:46.117600 kernel: io scheduler kyber registered
Jul 6 23:54:46.117618 kernel: io scheduler bfq registered
Jul 6 23:54:46.117633 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:54:46.117649 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:54:46.117665 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:54:46.117680 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 6 23:54:46.117694 kernel: i8042: PNP: No PS/2 controller found.
Jul 6 23:54:46.117878 kernel: rtc_cmos 00:02: registered as rtc0
Jul 6 23:54:46.117998 kernel: rtc_cmos 00:02: setting system clock to 2025-07-06T23:54:45 UTC (1751846085)
Jul 6 23:54:46.118112 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 6 23:54:46.118129 kernel: intel_pstate: CPU model not supported
Jul 6 23:54:46.118143 kernel: efifb: probing for efifb
Jul 6 23:54:46.118170 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 6 23:54:46.118184 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 6 23:54:46.118198 kernel: efifb: scrolling: redraw
Jul 6 23:54:46.118213 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:54:46.118227 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:54:46.118244 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:54:46.118259 kernel: pstore: Using crash dump compression: deflate
Jul 6 23:54:46.118273 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 6 23:54:46.118287 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:54:46.118301 kernel: Segment Routing with IPv6
Jul 6 23:54:46.118315 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:54:46.118329 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:54:46.118343 kernel: Key type dns_resolver registered
Jul 6 23:54:46.118357 kernel: IPI shorthand broadcast: enabled
Jul 6 23:54:46.118374 kernel: sched_clock: Marking stable (884002800, 46489500)->(1152240300, -221748000)
Jul 6 23:54:46.118389 kernel: registered taskstats version 1
Jul 6 23:54:46.118403 kernel: Loading compiled-in X.509 certificates
Jul 6 23:54:46.118417 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:54:46.118430 kernel: Key type .fscrypt registered
Jul 6 23:54:46.118444 kernel: Key type fscrypt-provisioning registered
Jul 6 23:54:46.118458 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:54:46.118472 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:54:46.118486 kernel: ima: No architecture policies found
Jul 6 23:54:46.118503 kernel: clk: Disabling unused clocks
Jul 6 23:54:46.118517 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:54:46.118531 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:54:46.118544 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:54:46.118559 kernel: Run /init as init process
Jul 6 23:54:46.118578 kernel: with arguments:
Jul 6 23:54:46.118594 kernel: /init
Jul 6 23:54:46.118607 kernel: with environment:
Jul 6 23:54:46.118621 kernel: HOME=/
Jul 6 23:54:46.118638 kernel: TERM=linux
Jul 6 23:54:46.118655 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:54:46.118672 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:54:46.118690 systemd[1]: Detected virtualization microsoft.
Jul 6 23:54:46.118706 systemd[1]: Detected architecture x86-64.
Jul 6 23:54:46.118722 systemd[1]: Running in initrd.
Jul 6 23:54:46.118737 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:54:46.118752 systemd[1]: Hostname set to .
Jul 6 23:54:46.118772 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:54:46.118788 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:54:46.118804 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:54:46.118819 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:54:46.118837 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:54:46.118853 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:54:46.118868 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:54:46.118884 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:54:46.118905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:54:46.118921 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:54:46.118937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:54:46.118953 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:54:46.118969 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:54:46.118985 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:54:46.119000 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:54:46.119019 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:54:46.119035 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:54:46.119051 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:54:46.119067 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:54:46.119083 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:54:46.119099 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:54:46.119115 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:54:46.119131 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:54:46.119150 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:54:46.119182 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:54:46.119197 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:54:46.119212 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:54:46.119227 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:54:46.119241 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:54:46.119257 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:54:46.119302 systemd-journald[176]: Collecting audit messages is disabled.
Jul 6 23:54:46.119340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:46.119356 systemd-journald[176]: Journal started
Jul 6 23:54:46.119386 systemd-journald[176]: Runtime Journal (/run/log/journal/9e1372ddbd7d4c27a82bb78061a81669) is 8.0M, max 158.8M, 150.8M free.
Jul 6 23:54:46.115915 systemd-modules-load[177]: Inserted module 'overlay'
Jul 6 23:54:46.131463 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:54:46.132225 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:54:46.147480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:54:46.154532 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:54:46.163086 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:54:46.168188 kernel: Bridge firewalling registered
Jul 6 23:54:46.169896 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jul 6 23:54:46.173232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:54:46.179332 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:54:46.189333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:54:46.195432 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:46.201533 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:54:46.204971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:54:46.219372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:46.226601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:54:46.232986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:54:46.246365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:54:46.249519 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:54:46.261575 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:54:46.278248 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:46.290340 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:54:46.309243 systemd-resolved[205]: Positive Trust Anchors:
Jul 6 23:54:46.313464 dracut-cmdline[214]: dracut-dracut-053
Jul 6 23:54:46.313464 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:46.309260 systemd-resolved[205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:54:46.309318 systemd-resolved[205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:54:46.326549 systemd-resolved[205]: Defaulting to hostname 'linux'.
Jul 6 23:54:46.328329 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:54:46.356228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:54:46.397183 kernel: SCSI subsystem initialized
Jul 6 23:54:46.408179 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:54:46.419200 kernel: iscsi: registered transport (tcp)
Jul 6 23:54:46.440630 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:54:46.440737 kernel: QLogic iSCSI HBA Driver
Jul 6 23:54:46.477031 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:54:46.485427 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:54:46.513175 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:54:46.513250 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:54:46.517959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:54:46.558196 kernel: raid6: avx512x4 gen() 18455 MB/s
Jul 6 23:54:46.577171 kernel: raid6: avx512x2 gen() 18212 MB/s
Jul 6 23:54:46.596165 kernel: raid6: avx512x1 gen() 18235 MB/s
Jul 6 23:54:46.615165 kernel: raid6: avx2x4 gen() 18260 MB/s
Jul 6 23:54:46.634171 kernel: raid6: avx2x2 gen() 18049 MB/s
Jul 6 23:54:46.654478 kernel: raid6: avx2x1 gen() 13691 MB/s
Jul 6 23:54:46.654535 kernel: raid6: using algorithm avx512x4 gen() 18455 MB/s
Jul 6 23:54:46.675020 kernel: raid6: .... xor() 6957 MB/s, rmw enabled
Jul 6 23:54:46.675059 kernel: raid6: using avx512x2 recovery algorithm
Jul 6 23:54:46.698180 kernel: xor: automatically using best checksumming function avx
Jul 6 23:54:46.845182 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:54:46.854715 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:54:46.861480 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:54:46.877760 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jul 6 23:54:46.882041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:54:46.897337 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:54:46.910231 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jul 6 23:54:46.938305 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:54:46.946337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:54:46.990503 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:54:47.025485 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:54:47.056948 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:54:47.073102 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:54:47.060618 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:54:47.066640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:54:47.079495 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:54:47.091623 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:54:47.101755 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:54:47.105391 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:54:47.105430 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:54:47.101888 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:47.118468 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:47.132057 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:54:47.132282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:47.135847 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:47.146055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:47.156668 kernel: hv_vmbus: Vmbus version:5.2
Jul 6 23:54:47.159944 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:54:47.186176 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 6 23:54:47.197949 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 6 23:54:47.198339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:47.203438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:54:47.203578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:47.211567 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:54:47.216970 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:47.234792 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 6 23:54:47.234822 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 6 23:54:47.234851 kernel: PTP clock support registered
Jul 6 23:54:47.239536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:47.242651 kernel: hv_vmbus: registering driver hv_storvsc
Jul 6 23:54:47.249266 kernel: scsi host1: storvsc_host_t
Jul 6 23:54:47.255121 kernel: scsi host0: storvsc_host_t
Jul 6 23:54:47.255372 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 6 23:54:47.262403 kernel: hv_utils: Registering HyperV Utility Driver
Jul 6 23:54:47.262445 kernel: hv_vmbus: registering driver hv_utils
Jul 6 23:54:47.262460 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 6 23:54:47.262489 kernel: hv_utils: Heartbeat IC version 3.0
Jul 6 23:54:47.268301 kernel: hv_utils: Shutdown IC version 3.2
Jul 6 23:54:48.030041 kernel: hv_utils: TimeSync IC version 4.0
Jul 6 23:54:48.030076 kernel: hv_vmbus: registering driver hid_hyperv
Jul 6 23:54:48.030205 systemd-resolved[205]: Clock change detected. Flushing caches.
Jul 6 23:54:48.043138 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 6 23:54:48.046129 kernel: hv_vmbus: registering driver hv_netvsc
Jul 6 23:54:48.046170 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 6 23:54:48.057100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:48.073770 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 6 23:54:48.074097 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:54:48.077411 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:48.083253 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 6 23:54:48.101155 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 6 23:54:48.101372 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 6 23:54:48.101499 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 6 23:54:48.104149 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 6 23:54:48.109146 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 6 23:54:48.114754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:48.118328 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:54:48.118353 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 6 23:54:48.246745 kernel: hv_netvsc 7c1e5235-f6fe-7c1e-5235-f6fe7c1e5235 eth0: VF slot 1 added
Jul 6 23:54:48.256334 kernel: hv_vmbus: registering driver hv_pci
Jul 6 23:54:48.256387 kernel: hv_pci 9fb8838b-4a28-481b-8ffd-6ac8be1a6596: PCI VMBus probing: Using version 0x10004
Jul 6 23:54:48.265305 kernel: hv_pci 9fb8838b-4a28-481b-8ffd-6ac8be1a6596: PCI host bridge to bus 4a28:00
Jul 6 23:54:48.265613 kernel: pci_bus 4a28:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jul 6 23:54:48.268480 kernel: pci_bus 4a28:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 6 23:54:48.273339 kernel: pci 4a28:00:02.0: [15b3:1016] type 00 class 0x020000
Jul 6 23:54:48.278139 kernel: pci 4a28:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 6 23:54:48.281269 kernel: pci 4a28:00:02.0: enabling Extended Tags
Jul 6 23:54:48.294146 kernel: pci 4a28:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4a28:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jul 6 23:54:48.299422 kernel: pci_bus 4a28:00: busn_res: [bus 00-ff] end is updated to 00
Jul 6 23:54:48.299733 kernel: pci 4a28:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 6 23:54:48.466641 kernel: mlx5_core 4a28:00:02.0: enabling device (0000 -> 0002)
Jul 6 23:54:48.471145 kernel: mlx5_core 4a28:00:02.0: firmware version: 14.30.5000
Jul 6 23:54:48.660442 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 6 23:54:48.699493 kernel: hv_netvsc 7c1e5235-f6fe-7c1e-5235-f6fe7c1e5235 eth0: VF registering: eth1
Jul 6 23:54:48.699869 kernel: mlx5_core 4a28:00:02.0 eth1: joined to eth0
Jul 6 23:54:48.702098 kernel: mlx5_core 4a28:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 6 23:54:48.717761 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 6 23:54:48.729322 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (444)
Jul 6 23:54:48.729352 kernel: mlx5_core 4a28:00:02.0 enP18984s1: renamed from eth1
Jul 6 23:54:48.740137 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (455)
Jul 6 23:54:48.766724 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:54:48.776657 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 6 23:54:48.776788 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 6 23:54:48.791380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:54:48.805174 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:54:48.813141 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:54:49.819148 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:54:49.820022 disk-uuid[603]: The operation has completed successfully.
Jul 6 23:54:49.904869 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:54:49.904998 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:54:49.923482 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:54:49.929350 sh[689]: Success
Jul 6 23:54:49.955184 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:54:50.137775 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:54:50.156252 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:54:50.170316 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:54:50.187271 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:54:50.187320 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:50.191693 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:54:50.194649 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:54:50.197166 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:54:50.459311 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:54:50.465058 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:54:50.480289 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:54:50.505737 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:54:50.519942 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:50.520005 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:50.522557 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:54:50.556214 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:54:50.565984 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:54:50.573163 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:50.580822 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:54:50.593340 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:54:50.614543 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:54:50.626290 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:54:50.648535 systemd-networkd[873]: lo: Link UP
Jul 6 23:54:50.648544 systemd-networkd[873]: lo: Gained carrier
Jul 6 23:54:50.650744 systemd-networkd[873]: Enumeration completed
Jul 6 23:54:50.650831 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:54:50.652448 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:54:50.652453 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:54:50.653989 systemd[1]: Reached target network.target - Network.
Jul 6 23:54:50.709145 kernel: mlx5_core 4a28:00:02.0 enP18984s1: Link up
Jul 6 23:54:50.763146 kernel: hv_netvsc 7c1e5235-f6fe-7c1e-5235-f6fe7c1e5235 eth0: Data path switched to VF: enP18984s1
Jul 6 23:54:50.763586 systemd-networkd[873]: enP18984s1: Link UP
Jul 6 23:54:50.763707 systemd-networkd[873]: eth0: Link UP
Jul 6 23:54:50.763866 systemd-networkd[873]: eth0: Gained carrier
Jul 6 23:54:50.763878 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:54:50.768366 systemd-networkd[873]: enP18984s1: Gained carrier
Jul 6 23:54:50.840180 systemd-networkd[873]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 6 23:54:51.346530 ignition[840]: Ignition 2.19.0
Jul 6 23:54:51.346542 ignition[840]: Stage: fetch-offline
Jul 6 23:54:51.346591 ignition[840]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:51.349734 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:54:51.346602 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:51.346725 ignition[840]: parsed url from cmdline: ""
Jul 6 23:54:51.346730 ignition[840]: no config URL provided
Jul 6 23:54:51.346736 ignition[840]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:54:51.346748 ignition[840]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:54:51.346756 ignition[840]: failed to fetch config: resource requires networking
Jul 6 23:54:51.370374 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:54:51.346976 ignition[840]: Ignition finished successfully
Jul 6 23:54:51.387514 ignition[882]: Ignition 2.19.0
Jul 6 23:54:51.387525 ignition[882]: Stage: fetch
Jul 6 23:54:51.387773 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:51.387783 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:51.387875 ignition[882]: parsed url from cmdline: ""
Jul 6 23:54:51.387878 ignition[882]: no config URL provided
Jul 6 23:54:51.387882 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:54:51.387892 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:54:51.387914 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 6 23:54:51.469324 ignition[882]: GET result: OK
Jul 6 23:54:51.469423 ignition[882]: config has been read from IMDS userdata
Jul 6 23:54:51.469465 ignition[882]: parsing config with SHA512: 9c9529621c30345321cb8b073cf097b88ad1f8b179d7a08d2b465c52655811af79ac4c0d03b3c93599713868f5b3159457eb0ba27a2f336b5e46de05a8fe9042
Jul 6 23:54:51.474722 unknown[882]: fetched base config from "system"
Jul 6 23:54:51.474898 unknown[882]: fetched base config from "system"
Jul 6 23:54:51.475321 ignition[882]: fetch: fetch complete
Jul 6 23:54:51.474907 unknown[882]: fetched user config from "azure"
Jul 6 23:54:51.475325 ignition[882]: fetch: fetch passed
Jul 6 23:54:51.477460 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:54:51.475374 ignition[882]: Ignition finished successfully
Jul 6 23:54:51.514303 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:54:51.531450 ignition[888]: Ignition 2.19.0
Jul 6 23:54:51.531461 ignition[888]: Stage: kargs
Jul 6 23:54:51.531699 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:51.531719 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:51.532622 ignition[888]: kargs: kargs passed
Jul 6 23:54:51.532664 ignition[888]: Ignition finished successfully
Jul 6 23:54:51.542445 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:54:51.557277 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:54:51.572098 ignition[894]: Ignition 2.19.0
Jul 6 23:54:51.572109 ignition[894]: Stage: disks
Jul 6 23:54:51.574638 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:54:51.572377 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:54:51.572387 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:54:51.573236 ignition[894]: disks: disks passed
Jul 6 23:54:51.573278 ignition[894]: Ignition finished successfully
Jul 6 23:54:51.586041 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:54:51.595170 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:54:51.598321 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:54:51.603611 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:54:51.607868 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:54:51.622377 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:54:51.680650 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 6 23:54:51.691619 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:54:51.703218 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:54:51.801134 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:54:51.801682 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:54:51.804457 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:54:51.843235 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:54:51.848899 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:54:51.859281 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:54:51.884579 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (913)
Jul 6 23:54:51.881085 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:54:51.881147 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:54:51.897415 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:54:51.912359 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:51.912394 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:54:51.912414 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:54:51.916308 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:54:51.922102 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:54:51.922906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:54:52.448522 coreos-metadata[915]: Jul 06 23:54:52.448 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:54:52.455957 coreos-metadata[915]: Jul 06 23:54:52.455 INFO Fetch successful
Jul 6 23:54:52.458752 coreos-metadata[915]: Jul 06 23:54:52.458 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:54:52.474475 coreos-metadata[915]: Jul 06 23:54:52.474 INFO Fetch successful
Jul 6 23:54:52.490085 coreos-metadata[915]: Jul 06 23:54:52.489 INFO wrote hostname ci-4081.3.4-a-fe0535f741 to /sysroot/etc/hostname
Jul 6 23:54:52.491951 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:54:52.540527 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:54:52.553284 systemd-networkd[873]: enP18984s1: Gained IPv6LL
Jul 6 23:54:52.604201 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:54:52.627330 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:54:52.632935 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:54:52.745294 systemd-networkd[873]: eth0: Gained IPv6LL
Jul 6 23:54:53.454303 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:54:53.464248 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:54:53.475405 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:54:53.483170 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:54:53.491422 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:54:53.504883 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:54:53.515251 ignition[1031]: INFO : Ignition 2.19.0 Jul 6 23:54:53.515251 ignition[1031]: INFO : Stage: mount Jul 6 23:54:53.519180 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:54:53.519180 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:54:53.519180 ignition[1031]: INFO : mount: mount passed Jul 6 23:54:53.519180 ignition[1031]: INFO : Ignition finished successfully Jul 6 23:54:53.518345 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:54:53.537213 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:54:53.546073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:54:53.567139 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1042) Jul 6 23:54:53.571135 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:54:53.571170 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:54:53.575626 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:54:53.581343 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:54:53.582808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:54:53.605473 ignition[1058]: INFO : Ignition 2.19.0 Jul 6 23:54:53.605473 ignition[1058]: INFO : Stage: files Jul 6 23:54:53.610034 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:54:53.610034 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:54:53.610034 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:54:53.622345 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:54:53.622345 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:54:53.687109 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:54:53.691961 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:54:53.691961 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:54:53.687650 unknown[1058]: wrote ssh authorized keys file for user: core Jul 6 23:54:53.702396 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 6 23:54:53.707230 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 6 23:54:53.707230 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 6 23:54:53.707230 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 6 23:54:53.780071 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:54:54.070020 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 6 
23:54:54.070020 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:54:54.080627 
ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:54:54.080627 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 6 23:54:54.893109 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:54:55.208365 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 6 23:54:55.208365 ignition[1058]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 6 23:54:55.266864 ignition[1058]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: op(10): [finished] 
setting preset to enabled for "prepare-helm.service" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:54:55.276416 ignition[1058]: INFO : files: files passed Jul 6 23:54:55.276416 ignition[1058]: INFO : Ignition finished successfully Jul 6 23:54:55.270243 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:54:55.311089 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:54:55.321282 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:54:55.340015 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:54:55.340152 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:54:55.351818 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:54:55.351818 initrd-setup-root-after-ignition[1086]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:54:55.360046 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:54:55.357537 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:54:55.364414 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:54:55.381270 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:54:55.405416 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:54:55.405552 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jul 6 23:54:55.416858 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:54:55.421968 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:54:55.427334 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:54:55.434331 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:54:55.451903 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:54:55.461285 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:54:55.471792 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:54:55.477979 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:54:55.483992 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:54:55.484222 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:54:55.484371 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:54:55.485055 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:54:55.485925 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:54:55.486459 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:54:55.486876 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:54:55.487416 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:54:55.487845 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:54:55.488320 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:54:55.488758 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:54:55.489178 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jul 6 23:54:55.489585 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:54:55.489958 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:54:55.490090 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:54:55.490806 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:54:55.491658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:54:55.492026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:54:55.525002 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:54:55.571671 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:54:55.571856 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:54:55.580190 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:54:55.580385 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:54:55.589544 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:54:55.589688 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:54:55.598489 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:54:55.598659 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:54:55.612452 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:54:55.619385 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:54:55.624162 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:54:55.624348 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:54:55.627831 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:54:55.627978 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 6 23:54:55.635081 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:54:55.635231 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:54:55.660762 ignition[1111]: INFO : Ignition 2.19.0 Jul 6 23:54:55.660762 ignition[1111]: INFO : Stage: umount Jul 6 23:54:55.660762 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:54:55.660762 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:54:55.660762 ignition[1111]: INFO : umount: umount passed Jul 6 23:54:55.660762 ignition[1111]: INFO : Ignition finished successfully Jul 6 23:54:55.653396 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:54:55.653502 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:54:55.661595 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:54:55.661723 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:54:55.693180 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:54:55.693274 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:54:55.701615 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:54:55.701692 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:54:55.707419 systemd[1]: Stopped target network.target - Network. Jul 6 23:54:55.715442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:54:55.715532 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:54:55.722159 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:54:55.730278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:54:55.733540 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:54:55.741668 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 6 23:54:55.744334 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:54:55.749673 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:54:55.752342 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:54:55.760155 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:54:55.760216 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:54:55.765633 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:54:55.765703 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:54:55.771818 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:54:55.771880 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:54:55.785812 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:54:55.791172 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:54:55.797442 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:54:55.798021 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:54:55.798106 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:54:55.802178 systemd-networkd[873]: eth0: DHCPv6 lease lost Jul 6 23:54:55.812025 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:54:55.815088 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:54:55.822763 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:54:55.822844 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:54:55.828860 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:54:55.831883 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:54:55.842290 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:54:55.845741 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 6 23:54:55.845801 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:54:55.852888 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:54:55.853280 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:54:55.854017 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:54:55.862844 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:54:55.862897 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:54:55.869388 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:54:55.869441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:54:55.876593 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:54:55.876650 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:54:55.886608 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:54:55.886742 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:54:55.904953 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:54:55.905033 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:54:55.913660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:54:55.913708 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:54:55.941001 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:54:55.941077 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:54:55.948013 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:54:55.948064 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:54:55.952993 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 6 23:54:55.953045 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:54:55.974151 kernel: hv_netvsc 7c1e5235-f6fe-7c1e-5235-f6fe7c1e5235 eth0: Data path switched from VF: enP18984s1 Jul 6 23:54:55.977323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:54:55.980722 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:54:55.980789 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:54:55.984778 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:54:55.984834 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:54:55.992333 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:54:55.992389 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:54:56.015658 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:54:56.015723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:54:56.025950 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:54:56.026083 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:54:56.037286 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:54:56.037402 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:54:56.044620 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:54:56.057290 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:54:56.141844 systemd[1]: Switching root. Jul 6 23:54:56.169936 systemd-journald[176]: Journal stopped Jul 6 23:55:01.773741 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). 
Jul 6 23:55:01.773793 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:55:01.773809 kernel: SELinux: policy capability open_perms=1 Jul 6 23:55:01.773822 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:55:01.773834 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:55:01.773847 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:55:01.773863 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:55:01.773882 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:55:01.773895 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:55:01.773907 kernel: audit: type=1403 audit(1751846098.689:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:55:01.773921 systemd[1]: Successfully loaded SELinux policy in 150.354ms. Jul 6 23:55:01.773937 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.510ms. Jul 6 23:55:01.773953 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:01.773969 systemd[1]: Detected virtualization microsoft. Jul 6 23:55:01.773988 systemd[1]: Detected architecture x86-64. Jul 6 23:55:01.774003 systemd[1]: Detected first boot. Jul 6 23:55:01.774019 systemd[1]: Hostname set to . Jul 6 23:55:01.774527 systemd[1]: Initializing machine ID from random generator. Jul 6 23:55:01.774555 zram_generator::config[1171]: No configuration found. Jul 6 23:55:01.774578 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:55:01.774595 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:55:01.774610 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Jul 6 23:55:01.774629 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:55:01.774645 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:55:01.774660 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:55:01.774677 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:55:01.774699 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:55:01.774715 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:55:01.774732 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:55:01.774749 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:55:01.774764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:01.774782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:01.774798 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:55:01.774817 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:55:01.774833 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:55:01.774848 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:01.774865 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:55:01.774883 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:01.774902 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:55:01.774922 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 6 23:55:01.774947 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:01.774963 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:55:01.774985 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:01.775003 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:55:01.775021 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:55:01.775039 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:55:01.775056 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 6 23:55:01.775076 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:01.775093 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:01.775150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:01.775172 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:55:01.775190 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:55:01.775207 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:55:01.775225 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:55:01.775248 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:01.775267 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:55:01.775285 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:55:01.775299 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:55:01.775314 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:55:01.775329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 6 23:55:01.775343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:01.775359 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:55:01.775376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:01.775392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:55:01.775408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:01.775424 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:55:01.775441 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:01.775457 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:55:01.775473 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 6 23:55:01.775490 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 6 23:55:01.775510 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:01.775526 kernel: loop: module loaded Jul 6 23:55:01.775543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:01.775558 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:55:01.777173 systemd-journald[1284]: Collecting audit messages is disabled. Jul 6 23:55:01.777225 systemd-journald[1284]: Journal started Jul 6 23:55:01.777262 systemd-journald[1284]: Runtime Journal (/run/log/journal/a7ccaf8d8cc0407b9ca423a7037a2ae0) is 8.0M, max 158.8M, 150.8M free. Jul 6 23:55:01.785253 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jul 6 23:55:01.808157 kernel: fuse: init (API version 7.39)
Jul 6 23:55:01.808235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:55:01.828144 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:01.843195 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:55:01.844586 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:55:01.868616 kernel: ACPI: bus type drm_connector registered
Jul 6 23:55:01.849465 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:55:01.853316 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:55:01.856111 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:55:01.860793 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:55:01.864098 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:55:01.869700 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:55:01.873756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:55:01.877931 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:55:01.878481 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:55:01.884732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:01.885143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:01.888899 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:55:01.889399 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:55:01.893083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:01.893567 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:01.897828 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:55:01.898238 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:55:01.901791 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:01.902235 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:01.906039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:55:01.910088 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:55:01.914677 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:55:01.932149 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:55:01.943300 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:55:01.952288 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:55:01.956375 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:55:01.974354 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:55:01.986386 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:55:01.990012 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:55:01.991317 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:55:01.994592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:55:01.997731 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:55:02.033392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:55:02.040481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:55:02.045622 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:55:02.049086 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:55:02.058342 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:55:02.067499 systemd-journald[1284]: Time spent on flushing to /var/log/journal/a7ccaf8d8cc0407b9ca423a7037a2ae0 is 21.088ms for 951 entries.
Jul 6 23:55:02.067499 systemd-journald[1284]: System Journal (/var/log/journal/a7ccaf8d8cc0407b9ca423a7037a2ae0) is 8.0M, max 2.6G, 2.6G free.
Jul 6 23:55:02.127539 systemd-journald[1284]: Received client request to flush runtime journal.
Jul 6 23:55:02.075421 udevadm[1337]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 6 23:55:02.088724 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:55:02.093476 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:55:02.129855 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:55:02.137919 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:55:02.165749 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Jul 6 23:55:02.165777 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Jul 6 23:55:02.179088 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:55:02.187333 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:55:02.356834 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:55:02.366317 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:55:02.390090 systemd-tmpfiles[1352]: ACLs are not supported, ignoring.
Jul 6 23:55:02.390131 systemd-tmpfiles[1352]: ACLs are not supported, ignoring.
Jul 6 23:55:02.395100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:55:03.572469 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:55:03.582329 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:55:03.604979 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Jul 6 23:55:03.857959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:55:03.869394 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:55:03.919358 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 6 23:55:03.946862 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:55:04.056418 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:55:04.060860 kernel: hv_vmbus: registering driver hyperv_fb
Jul 6 23:55:04.067162 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 6 23:55:04.074987 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 6 23:55:04.077624 kernel: Console: switching to colour dummy device 80x25
Jul 6 23:55:04.082497 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:55:04.087155 kernel: hv_vmbus: registering driver hv_balloon
Jul 6 23:55:04.090145 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:55:04.095155 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 6 23:55:04.225915 systemd-networkd[1364]: lo: Link UP
Jul 6 23:55:04.225926 systemd-networkd[1364]: lo: Gained carrier
Jul 6 23:55:04.228277 systemd-networkd[1364]: Enumeration completed
Jul 6 23:55:04.228440 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:55:04.235189 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:04.235198 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:55:04.245320 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:55:04.262440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:04.298981 kernel: mlx5_core 4a28:00:02.0 enP18984s1: Link up
Jul 6 23:55:04.298674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:55:04.298990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:04.320387 kernel: hv_netvsc 7c1e5235-f6fe-7c1e-5235-f6fe7c1e5235 eth0: Data path switched to VF: enP18984s1
Jul 6 23:55:04.320881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:04.324406 systemd-networkd[1364]: enP18984s1: Link UP
Jul 6 23:55:04.324562 systemd-networkd[1364]: eth0: Link UP
Jul 6 23:55:04.324568 systemd-networkd[1364]: eth0: Gained carrier
Jul 6 23:55:04.324594 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:04.329487 systemd-networkd[1364]: enP18984s1: Gained carrier
Jul 6 23:55:04.347009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:55:04.347323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:04.360264 systemd-networkd[1364]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 6 23:55:04.360868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:04.442181 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1366)
Jul 6 23:55:04.557282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:55:04.561334 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jul 6 23:55:04.653914 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:55:04.660579 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:55:04.718524 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:55:04.753525 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:55:04.757673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:55:04.765415 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 6 23:55:04.771815 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:55:04.831473 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:55:04.831905 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:55:04.833307 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:55:04.833331 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:55:04.833776 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:55:04.835804 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 6 23:55:04.851589 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:55:04.856276 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:55:04.859395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:04.863871 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:55:04.868571 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 6 23:55:04.881312 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:55:04.886449 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:04.890403 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:55:04.905135 kernel: loop0: detected capacity change from 0 to 31056
Jul 6 23:55:04.979480 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:55:05.003915 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:55:05.004942 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 6 23:55:05.342207 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:55:05.398145 kernel: loop1: detected capacity change from 0 to 140768
Jul 6 23:55:05.801232 systemd-networkd[1364]: enP18984s1: Gained IPv6LL
Jul 6 23:55:05.808145 kernel: loop2: detected capacity change from 0 to 221472
Jul 6 23:55:05.892141 kernel: loop3: detected capacity change from 0 to 142488
Jul 6 23:55:06.249398 systemd-networkd[1364]: eth0: Gained IPv6LL
Jul 6 23:55:06.252379 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:55:06.374141 kernel: loop4: detected capacity change from 0 to 31056
Jul 6 23:55:06.381134 kernel: loop5: detected capacity change from 0 to 140768
Jul 6 23:55:06.392148 kernel: loop6: detected capacity change from 0 to 221472
Jul 6 23:55:06.399140 kernel: loop7: detected capacity change from 0 to 142488
Jul 6 23:55:06.406640 (sd-merge)[1480]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 6 23:55:06.407241 (sd-merge)[1480]: Merged extensions into '/usr'.
Jul 6 23:55:06.415133 systemd[1]: Reloading requested from client PID 1464 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:55:06.415187 systemd[1]: Reloading...
Jul 6 23:55:06.468166 zram_generator::config[1504]: No configuration found.
Jul 6 23:55:06.634401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:55:06.708670 systemd[1]: Reloading finished in 290 ms.
Jul 6 23:55:06.727254 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:55:06.741269 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:55:06.746282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:55:06.753403 systemd[1]: Reloading requested from client PID 1571 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:55:06.753425 systemd[1]: Reloading...
Jul 6 23:55:06.774327 systemd-tmpfiles[1572]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:55:06.774829 systemd-tmpfiles[1572]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:55:06.775660 systemd-tmpfiles[1572]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:55:06.775966 systemd-tmpfiles[1572]: ACLs are not supported, ignoring.
Jul 6 23:55:06.776053 systemd-tmpfiles[1572]: ACLs are not supported, ignoring.
Jul 6 23:55:06.795177 systemd-tmpfiles[1572]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:55:06.795194 systemd-tmpfiles[1572]: Skipping /boot
Jul 6 23:55:06.838693 systemd-tmpfiles[1572]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:55:06.838714 systemd-tmpfiles[1572]: Skipping /boot
Jul 6 23:55:06.849142 zram_generator::config[1601]: No configuration found.
Jul 6 23:55:07.024022 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:55:07.100945 systemd[1]: Reloading finished in 347 ms.
Jul 6 23:55:07.121858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:55:07.134505 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:07.139399 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:55:07.146423 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:55:07.149875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:55:07.153539 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:07.159424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:55:07.165281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:55:07.170466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:07.174252 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:55:07.182435 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:55:07.194244 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:55:07.203663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:07.207400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:07.207598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:07.211285 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:07.211475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:07.216948 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:07.217375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:07.229692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:07.230008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:55:07.234432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:07.249472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:55:07.259487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:55:07.269677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:07.269869 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:07.273425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:07.276719 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:07.285009 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:07.286247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:07.293481 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:07.295380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:07.318859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:07.320250 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:55:07.324442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:07.342371 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:55:07.351245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:55:07.357372 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:55:07.360919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:07.361511 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:55:07.364317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:07.366607 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:55:07.372390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:07.372611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:07.376504 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:55:07.376720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:55:07.382751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:07.382923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:07.386715 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:07.386932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:07.415321 augenrules[1722]: No rules
Jul 6 23:55:07.416688 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:55:07.424545 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:55:07.433896 systemd-resolved[1683]: Positive Trust Anchors:
Jul 6 23:55:07.434631 systemd-resolved[1683]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:55:07.434696 systemd-resolved[1683]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:55:07.436887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:55:07.436962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:55:07.453056 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:55:07.466697 systemd-resolved[1683]: Using system hostname 'ci-4081.3.4-a-fe0535f741'.
Jul 6 23:55:07.468642 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:55:07.471689 systemd[1]: Reached target network.target - Network.
Jul 6 23:55:07.473997 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:55:07.476801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:55:07.833263 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:55:07.837390 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:55:11.051031 ldconfig[1459]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:55:11.061463 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:55:11.069538 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:55:11.084091 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:55:11.088681 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:55:11.091683 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:55:11.094805 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:55:11.098705 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:55:11.101730 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:55:11.104971 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:55:11.108399 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:55:11.108459 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:55:11.110746 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:55:11.115620 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:55:11.120237 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:55:11.138152 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:55:11.141450 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:55:11.144313 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:55:11.146887 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:55:11.149566 systemd[1]: System is tainted: cgroupsv1
Jul 6 23:55:11.149625 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:55:11.149663 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:55:11.155221 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 6 23:55:11.161224 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:55:11.166325 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 6 23:55:11.177258 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:55:11.191296 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:55:11.197105 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:55:11.201954 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:55:11.202160 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jul 6 23:55:11.209343 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 6 23:55:11.219414 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 6 23:55:11.229967 (chronyd)[1748]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 6 23:55:11.230391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:55:11.231816 jq[1753]: false
Jul 6 23:55:11.243299 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:55:11.247294 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:55:11.252416 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:55:11.257288 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:55:11.265299 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:55:11.280345 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:55:11.289038 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:55:11.291340 extend-filesystems[1756]: Found loop4
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found loop5
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found loop6
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found loop7
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found sda
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found sda1
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found sda2
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found sda3
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found usr
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found sda4
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found sda6
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found sda7
Jul 6 23:55:11.293326 extend-filesystems[1756]: Found sda9
Jul 6 23:55:11.293326 extend-filesystems[1756]: Checking size of /dev/sda9
Jul 6 23:55:11.458202 kernel: hv_utils: KVP IC version 4.0
Jul 6 23:55:11.308390 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:55:11.311905 KVP[1757]: KVP starting; pid is:1757
Jul 6 23:55:11.469844 extend-filesystems[1756]: Old size kept for /dev/sda9
Jul 6 23:55:11.469844 extend-filesystems[1756]: Found sr0
Jul 6 23:55:11.329239 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:55:11.339151 KVP[1757]: KVP LIC Version: 3.1
Jul 6 23:55:11.365213 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:55:11.495847 update_engine[1775]: I20250706 23:55:11.415607 1775 main.cc:92] Flatcar Update Engine starting
Jul 6 23:55:11.495847 update_engine[1775]: I20250706 23:55:11.469357 1775 update_check_scheduler.cc:74] Next update check in 3m31s
Jul 6 23:55:11.343352 chronyd[1785]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 6 23:55:11.367232 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:55:11.509855 jq[1782]: true
Jul 6 23:55:11.421306 chronyd[1785]: Timezone right/UTC failed leap second check, ignoring
Jul 6 23:55:11.372613 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:55:11.421547 chronyd[1785]: Loaded seccomp filter (level 2)
Jul 6 23:55:11.372911 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:55:11.431888 dbus-daemon[1751]: [system] SELinux support is enabled
Jul 6 23:55:11.399190 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:55:11.399526 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:55:11.534563 jq[1802]: true
Jul 6 23:55:11.414715 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:55:11.415041 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:55:11.434574 systemd[1]: Started chronyd.service - NTP client/server.
Jul 6 23:55:11.446490 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:55:11.459552 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:55:11.497583 (ntainerd)[1804]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:55:11.510192 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:55:11.510241 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:55:11.515536 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:55:11.515561 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:55:11.526374 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:55:11.532354 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:55:11.541355 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:55:11.560602 tar[1798]: linux-amd64/helm Jul 6 23:55:11.633145 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1840) Jul 6 23:55:11.639730 coreos-metadata[1750]: Jul 06 23:55:11.639 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:55:11.646162 coreos-metadata[1750]: Jul 06 23:55:11.645 INFO Fetch successful Jul 6 23:55:11.646162 coreos-metadata[1750]: Jul 06 23:55:11.646 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 6 23:55:11.653686 coreos-metadata[1750]: Jul 06 23:55:11.652 INFO Fetch successful Jul 6 23:55:11.656031 coreos-metadata[1750]: Jul 06 23:55:11.654 INFO Fetching http://168.63.129.16/machine/d1d3f4fa-b3d7-45cd-97cc-108391f4299f/1d38cdbb%2D4ae7%2D48f1%2Dac3e%2D8a4386504541.%5Fci%2D4081.3.4%2Da%2Dfe0535f741?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 6 23:55:11.660150 bash[1843]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:55:11.684859 coreos-metadata[1750]: Jul 06 23:55:11.666 INFO Fetch successful Jul 6 23:55:11.684859 coreos-metadata[1750]: Jul 06 23:55:11.667 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:55:11.672906 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:55:11.689378 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:55:11.696460 systemd-logind[1768]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:55:11.697175 coreos-metadata[1750]: Jul 06 23:55:11.697 INFO Fetch successful Jul 6 23:55:11.728602 systemd-logind[1768]: New seat seat0. Jul 6 23:55:11.745479 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 6 23:55:11.793191 sshd_keygen[1786]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:55:11.801417 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:55:11.844960 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:55:11.867555 locksmithd[1824]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:55:11.892653 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:55:11.914807 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:55:11.924616 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 6 23:55:11.944991 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:55:11.946373 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:55:11.963419 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:55:12.000704 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:55:12.013327 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 6 23:55:12.037437 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:55:12.050421 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:55:12.065543 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:55:12.355060 tar[1798]: linux-amd64/LICENSE Jul 6 23:55:12.355237 tar[1798]: linux-amd64/README.md Jul 6 23:55:12.370376 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:55:12.702205 containerd[1804]: time="2025-07-06T23:55:12.701214900Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:55:12.736596 containerd[1804]: time="2025-07-06T23:55:12.736531500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:55:12.738165 containerd[1804]: time="2025-07-06T23:55:12.738110500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:12.738165 containerd[1804]: time="2025-07-06T23:55:12.738158300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:55:12.738323 containerd[1804]: time="2025-07-06T23:55:12.738178200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:55:12.738398 containerd[1804]: time="2025-07-06T23:55:12.738373000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:55:12.738449 containerd[1804]: time="2025-07-06T23:55:12.738403600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:12.738511 containerd[1804]: time="2025-07-06T23:55:12.738489100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:12.738553 containerd[1804]: time="2025-07-06T23:55:12.738509400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:12.738775 containerd[1804]: time="2025-07-06T23:55:12.738750100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:12.738775 containerd[1804]: time="2025-07-06T23:55:12.738770900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:12.738878 containerd[1804]: time="2025-07-06T23:55:12.738790000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:12.738878 containerd[1804]: time="2025-07-06T23:55:12.738803400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:12.738950 containerd[1804]: time="2025-07-06T23:55:12.738906900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:12.739167 containerd[1804]: time="2025-07-06T23:55:12.739146600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:12.739993 containerd[1804]: time="2025-07-06T23:55:12.739629200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:12.739993 containerd[1804]: time="2025-07-06T23:55:12.739675500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:55:12.739993 containerd[1804]: time="2025-07-06T23:55:12.739789800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 6 23:55:12.739993 containerd[1804]: time="2025-07-06T23:55:12.739851300Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:55:12.802534 containerd[1804]: time="2025-07-06T23:55:12.802362500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:55:12.802534 containerd[1804]: time="2025-07-06T23:55:12.802456300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:55:12.802534 containerd[1804]: time="2025-07-06T23:55:12.802480200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:55:12.802534 containerd[1804]: time="2025-07-06T23:55:12.802502100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:55:12.802534 containerd[1804]: time="2025-07-06T23:55:12.802522100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:55:12.802826 containerd[1804]: time="2025-07-06T23:55:12.802732800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:55:12.803159 containerd[1804]: time="2025-07-06T23:55:12.803107800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803320600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803349500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803368700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803388400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803408300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803426000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803447400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803468200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803488200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803507500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803528400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803566200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803589900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jul 6 23:55:12.804727 containerd[1804]: time="2025-07-06T23:55:12.803611100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803633900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803651900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803679100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803697700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803716000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803735100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803755000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803771700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803789400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803806200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803831200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803863300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803881900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805290 containerd[1804]: time="2025-07-06T23:55:12.803897200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:55:12.805820 containerd[1804]: time="2025-07-06T23:55:12.803950200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:55:12.805820 containerd[1804]: time="2025-07-06T23:55:12.803974200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:55:12.805820 containerd[1804]: time="2025-07-06T23:55:12.803989600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:55:12.805820 containerd[1804]: time="2025-07-06T23:55:12.804006700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:55:12.805820 containerd[1804]: time="2025-07-06T23:55:12.804032600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.805820 containerd[1804]: time="2025-07-06T23:55:12.804050400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 6 23:55:12.805820 containerd[1804]: time="2025-07-06T23:55:12.804065300Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:55:12.805820 containerd[1804]: time="2025-07-06T23:55:12.804088300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:55:12.806141 containerd[1804]: time="2025-07-06T23:55:12.804491300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:55:12.806141 containerd[1804]: time="2025-07-06T23:55:12.804585100Z" level=info msg="Connect containerd service" Jul 6 23:55:12.806141 containerd[1804]: time="2025-07-06T23:55:12.804647800Z" level=info msg="using legacy CRI server" Jul 6 23:55:12.806141 containerd[1804]: time="2025-07-06T23:55:12.804660900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:55:12.806141 containerd[1804]: time="2025-07-06T23:55:12.804803400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:55:12.806141 containerd[1804]: time="2025-07-06T23:55:12.805545300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:55:12.811975 containerd[1804]: time="2025-07-06T23:55:12.807590400Z" level=info msg="Start subscribing containerd event" Jul 6 
23:55:12.811975 containerd[1804]: time="2025-07-06T23:55:12.807653400Z" level=info msg="Start recovering state" Jul 6 23:55:12.811975 containerd[1804]: time="2025-07-06T23:55:12.807729000Z" level=info msg="Start event monitor" Jul 6 23:55:12.811975 containerd[1804]: time="2025-07-06T23:55:12.807746300Z" level=info msg="Start snapshots syncer" Jul 6 23:55:12.811975 containerd[1804]: time="2025-07-06T23:55:12.807758900Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:55:12.811975 containerd[1804]: time="2025-07-06T23:55:12.807770000Z" level=info msg="Start streaming server" Jul 6 23:55:12.811975 containerd[1804]: time="2025-07-06T23:55:12.807973300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:55:12.811975 containerd[1804]: time="2025-07-06T23:55:12.808037900Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:55:12.811975 containerd[1804]: time="2025-07-06T23:55:12.811802700Z" level=info msg="containerd successfully booted in 0.112031s" Jul 6 23:55:12.808274 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:55:13.253298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:13.257819 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:55:13.261393 systemd[1]: Startup finished in 831ms (firmware) + 26.785s (loader) + 13.107s (kernel) + 14.720s (userspace) = 55.444s. Jul 6 23:55:13.268356 (kubelet)[1937]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:13.526675 login[1915]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:55:13.529160 login[1916]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:55:13.542871 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jul 6 23:55:13.551730 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:55:13.556869 systemd-logind[1768]: New session 1 of user core. Jul 6 23:55:13.562403 systemd-logind[1768]: New session 2 of user core. Jul 6 23:55:13.589204 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:55:13.601425 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:55:13.606320 (systemd)[1951]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:55:13.813511 systemd[1951]: Queued start job for default target default.target. Jul 6 23:55:13.814169 systemd[1951]: Created slice app.slice - User Application Slice. Jul 6 23:55:13.814203 systemd[1951]: Reached target paths.target - Paths. Jul 6 23:55:13.814222 systemd[1951]: Reached target timers.target - Timers. Jul 6 23:55:13.819237 systemd[1951]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:55:13.831091 systemd[1951]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:55:13.831193 systemd[1951]: Reached target sockets.target - Sockets. Jul 6 23:55:13.831211 systemd[1951]: Reached target basic.target - Basic System. Jul 6 23:55:13.831266 systemd[1951]: Reached target default.target - Main User Target. Jul 6 23:55:13.831301 systemd[1951]: Startup finished in 216ms. Jul 6 23:55:13.832299 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:55:13.838487 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:55:13.840993 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:55:14.071350 waagent[1913]: 2025-07-06T23:55:14.071157Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 6 23:55:14.075692 waagent[1913]: 2025-07-06T23:55:14.074571Z INFO Daemon Daemon OS: flatcar 4081.3.4 Jul 6 23:55:14.077441 waagent[1913]: 2025-07-06T23:55:14.077364Z INFO Daemon Daemon Python: 3.11.9 Jul 6 23:55:14.079949 waagent[1913]: 2025-07-06T23:55:14.079866Z INFO Daemon Daemon Run daemon Jul 6 23:55:14.082368 waagent[1913]: 2025-07-06T23:55:14.082314Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.4' Jul 6 23:55:14.094278 waagent[1913]: 2025-07-06T23:55:14.082507Z INFO Daemon Daemon Using waagent for provisioning Jul 6 23:55:14.094278 waagent[1913]: 2025-07-06T23:55:14.083164Z INFO Daemon Daemon Activate resource disk Jul 6 23:55:14.094278 waagent[1913]: 2025-07-06T23:55:14.083830Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 6 23:55:14.094278 waagent[1913]: 2025-07-06T23:55:14.088232Z INFO Daemon Daemon Found device: None Jul 6 23:55:14.094278 waagent[1913]: 2025-07-06T23:55:14.088743Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 6 23:55:14.094278 waagent[1913]: 2025-07-06T23:55:14.089683Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 6 23:55:14.094278 waagent[1913]: 2025-07-06T23:55:14.092204Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:55:14.094278 waagent[1913]: 2025-07-06T23:55:14.093236Z INFO Daemon Daemon Running default provisioning handler Jul 6 23:55:14.117486 waagent[1913]: 2025-07-06T23:55:14.117400Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 6 23:55:14.121499 waagent[1913]: 2025-07-06T23:55:14.118359Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 6 23:55:14.121499 waagent[1913]: 2025-07-06T23:55:14.119146Z INFO Daemon Daemon cloud-init is enabled: False Jul 6 23:55:14.121499 waagent[1913]: 2025-07-06T23:55:14.119955Z INFO Daemon Daemon Copying ovf-env.xml Jul 6 23:55:14.156070 kubelet[1937]: E0706 23:55:14.156006 1937 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:14.157735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:14.157994 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:14.220145 waagent[1913]: 2025-07-06T23:55:14.216502Z INFO Daemon Daemon Successfully mounted dvd Jul 6 23:55:14.233958 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 6 23:55:14.235768 waagent[1913]: 2025-07-06T23:55:14.235693Z INFO Daemon Daemon Detect protocol endpoint Jul 6 23:55:14.252151 waagent[1913]: 2025-07-06T23:55:14.236261Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:55:14.252151 waagent[1913]: 2025-07-06T23:55:14.237317Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 6 23:55:14.252151 waagent[1913]: 2025-07-06T23:55:14.237738Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 6 23:55:14.252151 waagent[1913]: 2025-07-06T23:55:14.238736Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 6 23:55:14.252151 waagent[1913]: 2025-07-06T23:55:14.239060Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 6 23:55:14.296683 waagent[1913]: 2025-07-06T23:55:14.296620Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 6 23:55:14.304506 waagent[1913]: 2025-07-06T23:55:14.297189Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 6 23:55:14.304506 waagent[1913]: 2025-07-06T23:55:14.297935Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 6 23:55:14.446303 waagent[1913]: 2025-07-06T23:55:14.446104Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 6 23:55:14.449817 waagent[1913]: 2025-07-06T23:55:14.449744Z INFO Daemon Daemon Forcing an update of the goal state. Jul 6 23:55:14.456324 waagent[1913]: 2025-07-06T23:55:14.456270Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:55:14.472631 waagent[1913]: 2025-07-06T23:55:14.472573Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 6 23:55:14.487910 waagent[1913]: 2025-07-06T23:55:14.473258Z INFO Daemon Jul 6 23:55:14.487910 waagent[1913]: 2025-07-06T23:55:14.474309Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: cca930ee-af7e-4283-b1e5-8adced52b13b eTag: 15612574651253442380 source: Fabric] Jul 6 23:55:14.487910 waagent[1913]: 2025-07-06T23:55:14.474979Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 6 23:55:14.487910 waagent[1913]: 2025-07-06T23:55:14.476071Z INFO Daemon Jul 6 23:55:14.487910 waagent[1913]: 2025-07-06T23:55:14.476883Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:55:14.490878 waagent[1913]: 2025-07-06T23:55:14.490832Z INFO Daemon Daemon Downloading artifacts profile blob Jul 6 23:55:14.555609 waagent[1913]: 2025-07-06T23:55:14.555524Z INFO Daemon Downloaded certificate {'thumbprint': 'A5587BC6D41A4A8204BF18C5C7658083314DAD94', 'hasPrivateKey': True} Jul 6 23:55:14.561334 waagent[1913]: 2025-07-06T23:55:14.561273Z INFO Daemon Fetch goal state completed Jul 6 23:55:14.569437 waagent[1913]: 2025-07-06T23:55:14.569383Z INFO Daemon Daemon Starting provisioning Jul 6 23:55:14.573745 waagent[1913]: 2025-07-06T23:55:14.572099Z INFO Daemon Daemon Handle ovf-env.xml. Jul 6 23:55:14.573745 waagent[1913]: 2025-07-06T23:55:14.572734Z INFO Daemon Daemon Set hostname [ci-4081.3.4-a-fe0535f741] Jul 6 23:55:14.578778 waagent[1913]: 2025-07-06T23:55:14.578726Z INFO Daemon Daemon Publish hostname [ci-4081.3.4-a-fe0535f741] Jul 6 23:55:14.586163 waagent[1913]: 2025-07-06T23:55:14.579079Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 6 23:55:14.586163 waagent[1913]: 2025-07-06T23:55:14.579959Z INFO Daemon Daemon Primary interface is [eth0] Jul 6 23:55:14.603936 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:14.603948 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 6 23:55:14.604001 systemd-networkd[1364]: eth0: DHCP lease lost Jul 6 23:55:14.605406 waagent[1913]: 2025-07-06T23:55:14.605295Z INFO Daemon Daemon Create user account if not exists Jul 6 23:55:14.622894 waagent[1913]: 2025-07-06T23:55:14.605680Z INFO Daemon Daemon User core already exists, skip useradd Jul 6 23:55:14.622894 waagent[1913]: 2025-07-06T23:55:14.606721Z INFO Daemon Daemon Configure sudoer Jul 6 23:55:14.622894 waagent[1913]: 2025-07-06T23:55:14.608137Z INFO Daemon Daemon Configure sshd Jul 6 23:55:14.622894 waagent[1913]: 2025-07-06T23:55:14.608848Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 6 23:55:14.622894 waagent[1913]: 2025-07-06T23:55:14.609771Z INFO Daemon Daemon Deploy ssh public key. Jul 6 23:55:14.622984 systemd-networkd[1364]: eth0: DHCPv6 lease lost Jul 6 23:55:14.648172 systemd-networkd[1364]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 6 23:55:15.735957 waagent[1913]: 2025-07-06T23:55:15.735882Z INFO Daemon Daemon Provisioning complete Jul 6 23:55:15.747136 waagent[1913]: 2025-07-06T23:55:15.747063Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 6 23:55:15.754686 waagent[1913]: 2025-07-06T23:55:15.747422Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 6 23:55:15.754686 waagent[1913]: 2025-07-06T23:55:15.747909Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 6 23:55:15.872784 waagent[2008]: 2025-07-06T23:55:15.872670Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 6 23:55:15.873264 waagent[2008]: 2025-07-06T23:55:15.872842Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.4 Jul 6 23:55:15.873264 waagent[2008]: 2025-07-06T23:55:15.872925Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 6 23:55:15.907271 waagent[2008]: 2025-07-06T23:55:15.907176Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 6 23:55:15.907497 waagent[2008]: 2025-07-06T23:55:15.907449Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:55:15.907592 waagent[2008]: 2025-07-06T23:55:15.907547Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:55:15.915639 waagent[2008]: 2025-07-06T23:55:15.915564Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:55:15.921310 waagent[2008]: 2025-07-06T23:55:15.921260Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 6 23:55:15.921762 waagent[2008]: 2025-07-06T23:55:15.921710Z INFO ExtHandler Jul 6 23:55:15.921838 waagent[2008]: 2025-07-06T23:55:15.921798Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 33604831-b143-4d33-8d6c-ce4036c19324 eTag: 15612574651253442380 source: Fabric] Jul 6 23:55:15.922157 waagent[2008]: 2025-07-06T23:55:15.922097Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 6 23:55:15.922717 waagent[2008]: 2025-07-06T23:55:15.922661Z INFO ExtHandler Jul 6 23:55:15.922781 waagent[2008]: 2025-07-06T23:55:15.922743Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:55:15.926764 waagent[2008]: 2025-07-06T23:55:15.926722Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 6 23:55:16.000589 waagent[2008]: 2025-07-06T23:55:16.000443Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A5587BC6D41A4A8204BF18C5C7658083314DAD94', 'hasPrivateKey': True} Jul 6 23:55:16.001079 waagent[2008]: 2025-07-06T23:55:16.001022Z INFO ExtHandler Fetch goal state completed Jul 6 23:55:16.014938 waagent[2008]: 2025-07-06T23:55:16.014870Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2008 Jul 6 23:55:16.015100 waagent[2008]: 2025-07-06T23:55:16.015052Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 6 23:55:16.016761 waagent[2008]: 2025-07-06T23:55:16.016703Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.4', '', 'Flatcar Container Linux by Kinvolk'] Jul 6 23:55:16.017125 waagent[2008]: 2025-07-06T23:55:16.017076Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 6 23:55:16.053377 waagent[2008]: 2025-07-06T23:55:16.053320Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 6 23:55:16.053651 waagent[2008]: 2025-07-06T23:55:16.053594Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 6 23:55:16.061434 waagent[2008]: 2025-07-06T23:55:16.061388Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 6 23:55:16.068816 systemd[1]: Reloading requested from client PID 2021 ('systemctl') (unit waagent.service)... Jul 6 23:55:16.068834 systemd[1]: Reloading... 
Jul 6 23:55:16.163159 zram_generator::config[2058]: No configuration found. Jul 6 23:55:16.278291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:16.358813 systemd[1]: Reloading finished in 289 ms. Jul 6 23:55:16.381860 waagent[2008]: 2025-07-06T23:55:16.381406Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 6 23:55:16.389627 systemd[1]: Reloading requested from client PID 2117 ('systemctl') (unit waagent.service)... Jul 6 23:55:16.389644 systemd[1]: Reloading... Jul 6 23:55:16.448151 zram_generator::config[2147]: No configuration found. Jul 6 23:55:16.595171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:16.676631 systemd[1]: Reloading finished in 286 ms. Jul 6 23:55:16.703606 waagent[2008]: 2025-07-06T23:55:16.702889Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 6 23:55:16.703606 waagent[2008]: 2025-07-06T23:55:16.703130Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 6 23:55:17.123608 waagent[2008]: 2025-07-06T23:55:17.123483Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 6 23:55:17.124481 waagent[2008]: 2025-07-06T23:55:17.124407Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 6 23:55:17.125467 waagent[2008]: 2025-07-06T23:55:17.125401Z INFO ExtHandler ExtHandler Starting env monitor service. 
Jul 6 23:55:17.126089 waagent[2008]: 2025-07-06T23:55:17.126019Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 6 23:55:17.126350 waagent[2008]: 2025-07-06T23:55:17.126282Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:55:17.126652 waagent[2008]: 2025-07-06T23:55:17.126590Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 6 23:55:17.126839 waagent[2008]: 2025-07-06T23:55:17.126785Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:55:17.127211 waagent[2008]: 2025-07-06T23:55:17.127142Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 6 23:55:17.127443 waagent[2008]: 2025-07-06T23:55:17.127363Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 6 23:55:17.127590 waagent[2008]: 2025-07-06T23:55:17.127535Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:55:17.128051 waagent[2008]: 2025-07-06T23:55:17.127989Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:55:17.128292 waagent[2008]: 2025-07-06T23:55:17.128228Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 6 23:55:17.128777 waagent[2008]: 2025-07-06T23:55:17.128721Z INFO EnvHandler ExtHandler Configure routes Jul 6 23:55:17.128851 waagent[2008]: 2025-07-06T23:55:17.128795Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 6 23:55:17.128993 waagent[2008]: 2025-07-06T23:55:17.128936Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 6 23:55:17.129113 waagent[2008]: 2025-07-06T23:55:17.129061Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 6 23:55:17.129113 waagent[2008]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 6 23:55:17.129113 waagent[2008]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 6 23:55:17.129113 waagent[2008]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 6 23:55:17.129113 waagent[2008]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:55:17.129113 waagent[2008]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:55:17.129113 waagent[2008]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:55:17.130724 waagent[2008]: 2025-07-06T23:55:17.130681Z INFO EnvHandler ExtHandler Gateway:None Jul 6 23:55:17.132163 waagent[2008]: 2025-07-06T23:55:17.131110Z INFO EnvHandler ExtHandler Routes:None Jul 6 23:55:17.136906 waagent[2008]: 2025-07-06T23:55:17.136863Z INFO ExtHandler ExtHandler Jul 6 23:55:17.137025 waagent[2008]: 2025-07-06T23:55:17.136964Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 981ee9bf-cfbe-4217-8902-be30975279e4 correlation 84cf008d-3631-4330-820c-c77a53e4e219 created: 2025-07-06T23:54:06.436508Z] Jul 6 23:55:17.137827 waagent[2008]: 2025-07-06T23:55:17.137773Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
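The MonitorHandler dump above is the raw `/proc/net/route` table, whose Destination/Gateway/Mask columns are 32-bit IPv4 values printed as little-endian hex. A minimal decoding sketch (the function name is mine, not from the log):

```python
import socket
import struct

def route_hex_to_ip(hexaddr: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian 32-bit hex,
    # so the bytes must be reversed before rendering dotted-quad notation
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

print(route_hex_to_ip("0108C80A"))  # gateway column  → 10.200.8.1
print(route_hex_to_ip("0008C80A"))  # destination     → 10.200.8.0
print(route_hex_to_ip("00FFFFFF"))  # mask            → 255.255.255.0
```

The decoded gateway matches the `DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1` line logged by systemd-networkd earlier.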
Jul 6 23:55:17.138526 waagent[2008]: 2025-07-06T23:55:17.138481Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 6 23:55:17.189295 waagent[2008]: 2025-07-06T23:55:17.189096Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A56484A1-EB56-41CC-80DE-C9D4618C2F51;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 6 23:55:17.200426 waagent[2008]: 2025-07-06T23:55:17.200349Z INFO MonitorHandler ExtHandler Network interfaces: Jul 6 23:55:17.200426 waagent[2008]: Executing ['ip', '-a', '-o', 'link']: Jul 6 23:55:17.200426 waagent[2008]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 6 23:55:17.200426 waagent[2008]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:f6:fe brd ff:ff:ff:ff:ff:ff Jul 6 23:55:17.200426 waagent[2008]: 3: enP18984s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:f6:fe brd ff:ff:ff:ff:ff:ff\ altname enP18984p0s2 Jul 6 23:55:17.200426 waagent[2008]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 6 23:55:17.200426 waagent[2008]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 6 23:55:17.200426 waagent[2008]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 6 23:55:17.200426 waagent[2008]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 6 23:55:17.200426 waagent[2008]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 6 23:55:17.200426 waagent[2008]: 2: eth0 inet6 fe80::7e1e:52ff:fe35:f6fe/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:55:17.200426 waagent[2008]: 3: enP18984s1 inet6 fe80::7e1e:52ff:fe35:f6fe/64 scope link proto kernel_ll \ 
valid_lft forever preferred_lft forever Jul 6 23:55:17.278215 waagent[2008]: 2025-07-06T23:55:17.278137Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 6 23:55:17.278215 waagent[2008]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:17.278215 waagent[2008]: pkts bytes target prot opt in out source destination Jul 6 23:55:17.278215 waagent[2008]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:17.278215 waagent[2008]: pkts bytes target prot opt in out source destination Jul 6 23:55:17.278215 waagent[2008]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:17.278215 waagent[2008]: pkts bytes target prot opt in out source destination Jul 6 23:55:17.278215 waagent[2008]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:55:17.278215 waagent[2008]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:55:17.278215 waagent[2008]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:55:17.281659 waagent[2008]: 2025-07-06T23:55:17.281600Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 6 23:55:17.281659 waagent[2008]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:17.281659 waagent[2008]: pkts bytes target prot opt in out source destination Jul 6 23:55:17.281659 waagent[2008]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:17.281659 waagent[2008]: pkts bytes target prot opt in out source destination Jul 6 23:55:17.281659 waagent[2008]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:17.281659 waagent[2008]: pkts bytes target prot opt in out source destination Jul 6 23:55:17.281659 waagent[2008]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:55:17.281659 waagent[2008]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:55:17.281659 waagent[2008]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:55:17.282045 waagent[2008]: 
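The three OUTPUT rules listed above are waagent's wire-server firewall for 168.63.129.16: DNS (dpt:53) is accepted, root-owned traffic (UID 0, i.e. the agent itself) is accepted, and any other new or invalid connection is dropped. A hypothetical Python paraphrase of that verdict order, for illustration only:

```python
def wireserver_verdict(dport: int, uid: int, ctstate: str) -> str:
    # Mirrors the rule order shown in the log for packets to 168.63.129.16:
    # rules are evaluated top to bottom, first match wins
    if dport == 53:
        return "ACCEPT"  # tcp dpt:53
    if uid == 0:
        return "ACCEPT"  # owner UID match 0
    if ctstate in ("INVALID", "NEW"):
        return "DROP"    # ctstate INVALID,NEW
    return "ACCEPT"      # established flows fall through to policy ACCEPT

print(wireserver_verdict(80, 1000, "NEW"))  # → DROP
```

Non-root processes can therefore never open new connections to the wire server, while replies on flows the agent already established still pass.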
2025-07-06T23:55:17.281915Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 6 23:55:23.107157 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:55:23.112835 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:57766.service - OpenSSH per-connection server daemon (10.200.16.10:57766). Jul 6 23:55:23.819860 sshd[2244]: Accepted publickey for core from 10.200.16.10 port 57766 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:23.821722 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:23.826229 systemd-logind[1768]: New session 3 of user core. Jul 6 23:55:23.834458 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:55:24.294064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:55:24.302713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:24.383754 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:57770.service - OpenSSH per-connection server daemon (10.200.16.10:57770). Jul 6 23:55:24.436302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:24.440830 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:25.009778 sshd[2253]: Accepted publickey for core from 10.200.16.10 port 57770 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:25.011394 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:25.015823 systemd-logind[1768]: New session 4 of user core. Jul 6 23:55:25.025402 systemd[1]: Started session-4.scope - Session 4 of User core. 
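The `SHA256:QmI8…` value in sshd's `Accepted publickey` lines is an OpenSSH-style key fingerprint: the base64-encoded SHA-256 digest of the decoded key blob, with trailing `=` padding stripped. A sketch of that computation (the sample blob below is made up; the real key from this log is not available here):

```python
import base64
import hashlib

def ssh_fingerprint(b64_blob: str) -> str:
    # OpenSSH SHA256 fingerprint: base64(SHA-256(raw key blob)),
    # with the trailing '=' padding removed
    raw = base64.b64decode(b64_blob)
    digest = hashlib.sha256(raw).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# hypothetical blob for illustration only
print(ssh_fingerprint(base64.b64encode(b"not-a-real-key").decode()))
```

Running `ssh-keygen -lf` on the matching entry in `~core/.ssh/authorized_keys` produces the same fingerprint format, which is how a logged session can be tied back to a specific key.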
Jul 6 23:55:25.247858 kubelet[2263]: E0706 23:55:25.247798 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:25.251779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:25.252882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:25.455403 sshd[2253]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:25.458863 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:57770.service: Deactivated successfully. Jul 6 23:55:25.463645 systemd-logind[1768]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:55:25.464402 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:55:25.466434 systemd-logind[1768]: Removed session 4. Jul 6 23:55:25.562473 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:57776.service - OpenSSH per-connection server daemon (10.200.16.10:57776). Jul 6 23:55:26.184295 sshd[2278]: Accepted publickey for core from 10.200.16.10 port 57776 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:26.186176 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:26.191924 systemd-logind[1768]: New session 5 of user core. Jul 6 23:55:26.198367 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:55:26.624197 sshd[2278]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:26.627962 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:57776.service: Deactivated successfully. Jul 6 23:55:26.633954 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:55:26.634847 systemd-logind[1768]: Session 5 logged out. Waiting for processes to exit. 
Jul 6 23:55:26.635742 systemd-logind[1768]: Removed session 5. Jul 6 23:55:26.741859 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:57786.service - OpenSSH per-connection server daemon (10.200.16.10:57786). Jul 6 23:55:27.362400 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 57786 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:27.364079 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:27.369338 systemd-logind[1768]: New session 6 of user core. Jul 6 23:55:27.376417 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:55:27.806542 sshd[2286]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:27.811586 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:57786.service: Deactivated successfully. Jul 6 23:55:27.816701 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:55:27.817497 systemd-logind[1768]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:55:27.818479 systemd-logind[1768]: Removed session 6. Jul 6 23:55:27.919717 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:57800.service - OpenSSH per-connection server daemon (10.200.16.10:57800). Jul 6 23:55:28.542883 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 57800 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:28.544756 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:28.549437 systemd-logind[1768]: New session 7 of user core. Jul 6 23:55:28.559574 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 6 23:55:28.990923 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:55:28.991323 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:29.020787 sudo[2298]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:29.133611 sshd[2294]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:29.137487 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:57800.service: Deactivated successfully. Jul 6 23:55:29.142230 systemd-logind[1768]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:55:29.143497 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:55:29.145544 systemd-logind[1768]: Removed session 7. Jul 6 23:55:29.251446 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:57806.service - OpenSSH per-connection server daemon (10.200.16.10:57806). Jul 6 23:55:29.873508 sshd[2303]: Accepted publickey for core from 10.200.16.10 port 57806 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:29.875408 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:29.880606 systemd-logind[1768]: New session 8 of user core. Jul 6 23:55:29.890351 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:55:30.218542 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:55:30.218927 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:30.222710 sudo[2308]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:30.227861 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:55:30.228238 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:30.243466 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jul 6 23:55:30.245288 auditctl[2311]: No rules Jul 6 23:55:30.245670 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:55:30.245945 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:30.253667 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:30.277253 augenrules[2330]: No rules Jul 6 23:55:30.279037 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:30.282966 sudo[2307]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:30.384549 sshd[2303]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:30.390272 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:57806.service: Deactivated successfully. Jul 6 23:55:30.393661 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:55:30.394405 systemd-logind[1768]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:55:30.395332 systemd-logind[1768]: Removed session 8. Jul 6 23:55:30.500754 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:55258.service - OpenSSH per-connection server daemon (10.200.16.10:55258). Jul 6 23:55:31.121438 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 55258 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:31.122988 sshd[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:31.127173 systemd-logind[1768]: New session 9 of user core. Jul 6 23:55:31.133405 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:55:31.469268 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:55:31.469661 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:32.679421 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 6 23:55:32.682205 (dockerd)[2359]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:55:34.233238 dockerd[2359]: time="2025-07-06T23:55:34.233174000Z" level=info msg="Starting up" Jul 6 23:55:35.044358 systemd[1]: var-lib-docker-metacopy\x2dcheck2119434004-merged.mount: Deactivated successfully. Jul 6 23:55:35.065797 dockerd[2359]: time="2025-07-06T23:55:35.065750700Z" level=info msg="Loading containers: start." Jul 6 23:55:35.213067 chronyd[1785]: Selected source PHC0 Jul 6 23:55:35.230499 kernel: Initializing XFRM netlink socket Jul 6 23:55:35.276422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:55:35.283318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:35.455322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:35.458262 (kubelet)[2440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:35.503835 kubelet[2440]: E0706 23:55:35.503784 2440 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:35.508395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:35.508622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:35.518904 systemd-networkd[1364]: docker0: Link UP Jul 6 23:55:36.033344 dockerd[2359]: time="2025-07-06T23:55:36.033297478Z" level=info msg="Loading containers: done." 
Jul 6 23:55:36.094879 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1492873233-merged.mount: Deactivated successfully. Jul 6 23:55:36.103317 dockerd[2359]: time="2025-07-06T23:55:36.103274209Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:55:36.103481 dockerd[2359]: time="2025-07-06T23:55:36.103415537Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:55:36.103586 dockerd[2359]: time="2025-07-06T23:55:36.103564966Z" level=info msg="Daemon has completed initialization" Jul 6 23:55:36.157008 dockerd[2359]: time="2025-07-06T23:55:36.155782338Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:55:36.156097 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:55:37.452827 containerd[1804]: time="2025-07-06T23:55:37.452769782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:55:38.212063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028210146.mount: Deactivated successfully. 
Jul 6 23:55:39.788095 containerd[1804]: time="2025-07-06T23:55:39.788032730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:39.791349 containerd[1804]: time="2025-07-06T23:55:39.791302242Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752" Jul 6 23:55:39.796253 containerd[1804]: time="2025-07-06T23:55:39.796197261Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:39.806839 containerd[1804]: time="2025-07-06T23:55:39.806778300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:39.808516 containerd[1804]: time="2025-07-06T23:55:39.807815704Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.354994622s" Jul 6 23:55:39.808516 containerd[1804]: time="2025-07-06T23:55:39.807859405Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 6 23:55:39.808809 containerd[1804]: time="2025-07-06T23:55:39.808775208Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:55:41.492639 containerd[1804]: time="2025-07-06T23:55:41.492578416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:41.494677 containerd[1804]: time="2025-07-06T23:55:41.494611023Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302" Jul 6 23:55:41.500012 containerd[1804]: time="2025-07-06T23:55:41.499956843Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:41.508848 containerd[1804]: time="2025-07-06T23:55:41.508787777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:41.509960 containerd[1804]: time="2025-07-06T23:55:41.509763180Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.700949372s" Jul 6 23:55:41.509960 containerd[1804]: time="2025-07-06T23:55:41.509813080Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 6 23:55:41.510714 containerd[1804]: time="2025-07-06T23:55:41.510498383Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:55:42.908308 containerd[1804]: time="2025-07-06T23:55:42.908249819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.910709 containerd[1804]: time="2025-07-06T23:55:42.910625028Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679" Jul 6 23:55:42.916233 containerd[1804]: time="2025-07-06T23:55:42.916169949Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.922249 containerd[1804]: time="2025-07-06T23:55:42.922186971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.923480 containerd[1804]: time="2025-07-06T23:55:42.923203975Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.412549991s" Jul 6 23:55:42.923480 containerd[1804]: time="2025-07-06T23:55:42.923244675Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 6 23:55:42.924193 containerd[1804]: time="2025-07-06T23:55:42.924141879Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:55:44.291759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852828756.mount: Deactivated successfully. 
Jul 6 23:55:44.851523 containerd[1804]: time="2025-07-06T23:55:44.851445032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:44.853862 containerd[1804]: time="2025-07-06T23:55:44.853800142Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951" Jul 6 23:55:44.858582 containerd[1804]: time="2025-07-06T23:55:44.858528962Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:44.864545 containerd[1804]: time="2025-07-06T23:55:44.864479787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:44.865354 containerd[1804]: time="2025-07-06T23:55:44.865042890Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.940854811s" Jul 6 23:55:44.865354 containerd[1804]: time="2025-07-06T23:55:44.865084690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 6 23:55:44.865954 containerd[1804]: time="2025-07-06T23:55:44.865929494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:55:45.482428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341568915.mount: Deactivated successfully. Jul 6 23:55:45.543948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
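Each image pull pairs a "bytes read" figure with a wall-clock duration in its "Pulled image … in Ns" line, so pull throughput can be recovered from the log alone. A small sketch using the kube-proxy numbers from the lines above:

```python
def pull_throughput_mib_s(bytes_read: int, seconds: float) -> float:
    # Transfer rate in MiB/s from the "bytes read" count and the
    # duration reported in a containerd "Pulled image ... in Ns" line
    return bytes_read / seconds / (1024 * 1024)

# kube-proxy:v1.31.10 — 30383951 bytes read, completed in 1.940854811s
rate = pull_throughput_mib_s(30383951, 1.940854811)
print(round(rate, 1))
```

The rate comes out just under 15 MiB/s, which is in line with the other pulls in this boot (e.g. kube-apiserver: ~28 MB in ~2.35 s).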
Jul 6 23:55:45.549377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:45.819348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:45.824495 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:46.331173 kubelet[2605]: E0706 23:55:46.331088 2605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:46.333831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:46.334854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:47.520163 containerd[1804]: time="2025-07-06T23:55:47.520093380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:47.522282 containerd[1804]: time="2025-07-06T23:55:47.522220189Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 6 23:55:47.526252 containerd[1804]: time="2025-07-06T23:55:47.526199506Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:47.530871 containerd[1804]: time="2025-07-06T23:55:47.530830126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:47.532068 containerd[1804]: time="2025-07-06T23:55:47.531918031Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.665956537s" Jul 6 23:55:47.532068 containerd[1804]: time="2025-07-06T23:55:47.531958431Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:55:47.533006 containerd[1804]: time="2025-07-06T23:55:47.532816835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:55:48.078743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760730303.mount: Deactivated successfully. Jul 6 23:55:48.099465 containerd[1804]: time="2025-07-06T23:55:48.099420166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:48.101988 containerd[1804]: time="2025-07-06T23:55:48.101931476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 6 23:55:48.107279 containerd[1804]: time="2025-07-06T23:55:48.107227099Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:48.113254 containerd[1804]: time="2025-07-06T23:55:48.113193925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:48.114572 containerd[1804]: time="2025-07-06T23:55:48.113992428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", 
repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 581.138893ms" Jul 6 23:55:48.114572 containerd[1804]: time="2025-07-06T23:55:48.114034728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:55:48.117890 containerd[1804]: time="2025-07-06T23:55:48.117863845Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:55:48.751832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585938937.mount: Deactivated successfully. Jul 6 23:55:51.029827 containerd[1804]: time="2025-07-06T23:55:51.029762837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:51.032813 containerd[1804]: time="2025-07-06T23:55:51.032666649Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jul 6 23:55:51.036305 containerd[1804]: time="2025-07-06T23:55:51.036252065Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:51.042081 containerd[1804]: time="2025-07-06T23:55:51.042022489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:51.043344 containerd[1804]: time="2025-07-06T23:55:51.043183594Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 2.925281449s" Jul 6 23:55:51.043344 containerd[1804]: time="2025-07-06T23:55:51.043223095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 6 23:55:52.193240 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 6 23:55:53.728302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:53.736383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:53.778282 systemd[1]: Reloading requested from client PID 2744 ('systemctl') (unit session-9.scope)... Jul 6 23:55:53.778299 systemd[1]: Reloading... Jul 6 23:55:53.879175 zram_generator::config[2784]: No configuration found. Jul 6 23:55:54.033990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:54.114001 systemd[1]: Reloading finished in 335 ms. Jul 6 23:55:54.152595 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:55:54.152706 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:55:54.153071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:54.155419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:54.504328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:54.516528 (kubelet)[2863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:55:55.227113 kubelet[2863]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:55:55.227113 kubelet[2863]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:55:55.227113 kubelet[2863]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:55:55.227113 kubelet[2863]: I0706 23:55:55.226872 2863 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:55:55.381073 kubelet[2863]: I0706 23:55:55.381024 2863 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:55:55.381073 kubelet[2863]: I0706 23:55:55.381058 2863 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:55:55.381416 kubelet[2863]: I0706 23:55:55.381393 2863 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:55:55.404429 kubelet[2863]: E0706 23:55:55.404384 2863 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:55.410609 kubelet[2863]: I0706 23:55:55.410000 2863 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:55:55.418709 kubelet[2863]: E0706 23:55:55.418673 2863 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:55:55.418827 kubelet[2863]: I0706 23:55:55.418765 2863 server.go:1408] 
"CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:55:55.423357 kubelet[2863]: I0706 23:55:55.423329 2863 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:55:55.424347 kubelet[2863]: I0706 23:55:55.424326 2863 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:55:55.424532 kubelet[2863]: I0706 23:55:55.424493 2863 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:55:55.424709 kubelet[2863]: I0706 23:55:55.424529 2863 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-fe0535f741","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUMa
nagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 6 23:55:55.424875 kubelet[2863]: I0706 23:55:55.424726 2863 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:55:55.424875 kubelet[2863]: I0706 23:55:55.424740 2863 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:55:55.424875 kubelet[2863]: I0706 23:55:55.424873 2863 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:55.427982 kubelet[2863]: I0706 23:55:55.427698 2863 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:55:55.427982 kubelet[2863]: I0706 23:55:55.427732 2863 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:55:55.427982 kubelet[2863]: I0706 23:55:55.427776 2863 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:55:55.427982 kubelet[2863]: I0706 23:55:55.427797 2863 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:55:55.429517 kubelet[2863]: W0706 23:55:55.429334 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-fe0535f741&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 6 23:55:55.429517 kubelet[2863]: E0706 23:55:55.429403 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-fe0535f741&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" 
logger="UnhandledError" Jul 6 23:55:55.430626 kubelet[2863]: W0706 23:55:55.430497 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 6 23:55:55.430626 kubelet[2863]: E0706 23:55:55.430538 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:55.431648 kubelet[2863]: I0706 23:55:55.431005 2863 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:55:55.431648 kubelet[2863]: I0706 23:55:55.431508 2863 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:55:55.432906 kubelet[2863]: W0706 23:55:55.432401 2863 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 6 23:55:55.435539 kubelet[2863]: I0706 23:55:55.435361 2863 server.go:1274] "Started kubelet" Jul 6 23:55:55.436626 kubelet[2863]: I0706 23:55:55.436212 2863 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:55:55.437523 kubelet[2863]: I0706 23:55:55.437393 2863 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:55:55.440806 kubelet[2863]: I0706 23:55:55.440753 2863 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:55:55.441374 kubelet[2863]: I0706 23:55:55.441114 2863 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:55:55.442236 kubelet[2863]: I0706 23:55:55.441623 2863 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:55:55.443255 kubelet[2863]: E0706 23:55:55.441297 2863 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-a-fe0535f741.184fcecbf3db5c76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-a-fe0535f741,UID:ci-4081.3.4-a-fe0535f741,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-a-fe0535f741,},FirstTimestamp:2025-07-06 23:55:55.435334774 +0000 UTC m=+0.915396447,LastTimestamp:2025-07-06 23:55:55.435334774 +0000 UTC m=+0.915396447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-a-fe0535f741,}" Jul 6 23:55:55.446292 kubelet[2863]: E0706 23:55:55.446272 2863 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:55:55.446638 kubelet[2863]: I0706 23:55:55.446619 2863 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:55:55.448644 kubelet[2863]: I0706 23:55:55.448624 2863 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:55:55.448875 kubelet[2863]: E0706 23:55:55.448851 2863 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fe0535f741\" not found" Jul 6 23:55:55.450185 kubelet[2863]: E0706 23:55:55.450148 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-fe0535f741?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Jul 6 23:55:55.450842 kubelet[2863]: I0706 23:55:55.450390 2863 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:55:55.450842 kubelet[2863]: I0706 23:55:55.450432 2863 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:55:55.450842 kubelet[2863]: W0706 23:55:55.450749 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 6 23:55:55.450842 kubelet[2863]: E0706 23:55:55.450802 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:55.451645 kubelet[2863]: I0706 23:55:55.451628 2863 factory.go:221] 
Registration of the systemd container factory successfully Jul 6 23:55:55.451822 kubelet[2863]: I0706 23:55:55.451801 2863 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:55:55.453364 kubelet[2863]: I0706 23:55:55.453345 2863 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:55:55.481204 kubelet[2863]: I0706 23:55:55.480236 2863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:55:55.484276 kubelet[2863]: I0706 23:55:55.483936 2863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:55:55.484276 kubelet[2863]: I0706 23:55:55.483960 2863 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:55:55.484276 kubelet[2863]: I0706 23:55:55.483978 2863 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:55:55.484276 kubelet[2863]: E0706 23:55:55.484019 2863 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:55:55.488371 kubelet[2863]: W0706 23:55:55.488091 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 6 23:55:55.488371 kubelet[2863]: E0706 23:55:55.488154 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:55.488900 kubelet[2863]: I0706 23:55:55.488875 2863 cpu_manager.go:214] 
"Starting CPU manager" policy="none" Jul 6 23:55:55.488900 kubelet[2863]: I0706 23:55:55.488894 2863 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:55:55.489018 kubelet[2863]: I0706 23:55:55.488913 2863 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:55.495488 kubelet[2863]: I0706 23:55:55.495466 2863 policy_none.go:49] "None policy: Start" Jul 6 23:55:55.496184 kubelet[2863]: I0706 23:55:55.496145 2863 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:55:55.496184 kubelet[2863]: I0706 23:55:55.496174 2863 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:55:55.503003 kubelet[2863]: I0706 23:55:55.502974 2863 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:55:55.503210 kubelet[2863]: I0706 23:55:55.503192 2863 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:55:55.503276 kubelet[2863]: I0706 23:55:55.503208 2863 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:55:55.504358 kubelet[2863]: I0706 23:55:55.504333 2863 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:55:55.507553 kubelet[2863]: E0706 23:55:55.507526 2863 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-a-fe0535f741\" not found" Jul 6 23:55:55.604838 kubelet[2863]: I0706 23:55:55.604793 2863 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.605194 kubelet[2863]: E0706 23:55:55.605164 2863 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.650988 kubelet[2863]: E0706 23:55:55.650870 2863 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-fe0535f741?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Jul 6 23:55:55.751755 kubelet[2863]: I0706 23:55:55.751148 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.751755 kubelet[2863]: I0706 23:55:55.751199 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.751755 kubelet[2863]: I0706 23:55:55.751224 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.751755 kubelet[2863]: I0706 23:55:55.751243 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d1cf280b5de532b2bfe1d1edbbb95d8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-fe0535f741\" (UID: \"0d1cf280b5de532b2bfe1d1edbbb95d8\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.751755 kubelet[2863]: I0706 23:55:55.751265 2863 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba27b2eef6698fa2441dfb3253e5147a-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-fe0535f741\" (UID: \"ba27b2eef6698fa2441dfb3253e5147a\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.752058 kubelet[2863]: I0706 23:55:55.751287 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba27b2eef6698fa2441dfb3253e5147a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-fe0535f741\" (UID: \"ba27b2eef6698fa2441dfb3253e5147a\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.752058 kubelet[2863]: I0706 23:55:55.751314 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.752058 kubelet[2863]: I0706 23:55:55.751346 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.752058 kubelet[2863]: I0706 23:55:55.751381 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba27b2eef6698fa2441dfb3253e5147a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-fe0535f741\" (UID: 
\"ba27b2eef6698fa2441dfb3253e5147a\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.807587 kubelet[2863]: I0706 23:55:55.807550 2863 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.808070 kubelet[2863]: E0706 23:55:55.808017 2863 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:55:55.893737 containerd[1804]: time="2025-07-06T23:55:55.893683899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-fe0535f741,Uid:ba27b2eef6698fa2441dfb3253e5147a,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:55.897353 containerd[1804]: time="2025-07-06T23:55:55.897314510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-fe0535f741,Uid:c89eb423ee73d743b17dee73cf981800,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:55.898842 containerd[1804]: time="2025-07-06T23:55:55.898786215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-fe0535f741,Uid:0d1cf280b5de532b2bfe1d1edbbb95d8,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:56.052404 kubelet[2863]: E0706 23:55:56.052351 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-fe0535f741?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Jul 6 23:55:56.210022 kubelet[2863]: I0706 23:55:56.209987 2863 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:55:56.210362 kubelet[2863]: E0706 23:55:56.210330 2863 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: 
connection refused" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:55:56.265451 kubelet[2863]: W0706 23:55:56.265380 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 6 23:55:56.265886 kubelet[2863]: E0706 23:55:56.265461 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:56.309368 kubelet[2863]: W0706 23:55:56.309229 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 6 23:55:56.309368 kubelet[2863]: E0706 23:55:56.309284 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:56.456497 update_engine[1775]: I20250706 23:55:56.456421 1775 update_attempter.cc:509] Updating boot flags... Jul 6 23:55:56.474469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3689009906.mount: Deactivated successfully. 
Jul 6 23:55:56.507180 containerd[1804]: time="2025-07-06T23:55:56.506730005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.516185 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2909) Jul 6 23:55:56.517617 containerd[1804]: time="2025-07-06T23:55:56.517559138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:55:56.522721 containerd[1804]: time="2025-07-06T23:55:56.522676154Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.528710 containerd[1804]: time="2025-07-06T23:55:56.528661873Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.545246 containerd[1804]: time="2025-07-06T23:55:56.545183724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 6 23:55:56.552198 containerd[1804]: time="2025-07-06T23:55:56.550870142Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.559040 containerd[1804]: time="2025-07-06T23:55:56.558991567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:55:56.564065 containerd[1804]: time="2025-07-06T23:55:56.563970382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.570018 containerd[1804]: time="2025-07-06T23:55:56.569972801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 672.568191ms" Jul 6 23:55:56.595230 containerd[1804]: time="2025-07-06T23:55:56.595189580Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 696.333065ms" Jul 6 23:55:56.603461 containerd[1804]: time="2025-07-06T23:55:56.603421405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 709.637006ms" Jul 6 23:55:56.728461 kubelet[2863]: W0706 23:55:56.728395 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 6 23:55:56.728630 kubelet[2863]: E0706 23:55:56.728475 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: 
connection refused" logger="UnhandledError" Jul 6 23:55:56.801692 kubelet[2863]: W0706 23:55:56.801619 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-fe0535f741&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 6 23:55:56.801862 kubelet[2863]: E0706 23:55:56.801701 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-fe0535f741&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:56.853090 kubelet[2863]: E0706 23:55:56.852873 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-fe0535f741?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="1.6s" Jul 6 23:55:57.014265 kubelet[2863]: I0706 23:55:57.014228 2863 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:55:57.014624 kubelet[2863]: E0706 23:55:57.014594 2863 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:55:57.087817 kubelet[2863]: E0706 23:55:57.087671 2863 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-a-fe0535f741.184fcecbf3db5c76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-a-fe0535f741,UID:ci-4081.3.4-a-fe0535f741,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-a-fe0535f741,},FirstTimestamp:2025-07-06 23:55:55.435334774 +0000 UTC m=+0.915396447,LastTimestamp:2025-07-06 23:55:55.435334774 +0000 UTC m=+0.915396447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-a-fe0535f741,}" Jul 6 23:55:57.360303 containerd[1804]: time="2025-07-06T23:55:57.360188658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:57.360785 containerd[1804]: time="2025-07-06T23:55:57.360302658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:57.360785 containerd[1804]: time="2025-07-06T23:55:57.360339258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:57.361460 containerd[1804]: time="2025-07-06T23:55:57.361166461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:57.364659 containerd[1804]: time="2025-07-06T23:55:57.364411771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:57.364659 containerd[1804]: time="2025-07-06T23:55:57.364475671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:57.364659 containerd[1804]: time="2025-07-06T23:55:57.364494571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:57.364659 containerd[1804]: time="2025-07-06T23:55:57.363888469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:57.364659 containerd[1804]: time="2025-07-06T23:55:57.363947469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:57.364659 containerd[1804]: time="2025-07-06T23:55:57.363970270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:57.364939 containerd[1804]: time="2025-07-06T23:55:57.364753972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:57.365891 containerd[1804]: time="2025-07-06T23:55:57.365432474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:57.482322 containerd[1804]: time="2025-07-06T23:55:57.482271737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-fe0535f741,Uid:ba27b2eef6698fa2441dfb3253e5147a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d72ba789c8ec159ba9e674f93bf821102f47349a063618744d2967630a75ad68\"" Jul 6 23:55:57.494112 containerd[1804]: time="2025-07-06T23:55:57.494059974Z" level=info msg="CreateContainer within sandbox \"d72ba789c8ec159ba9e674f93bf821102f47349a063618744d2967630a75ad68\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:55:57.497436 containerd[1804]: time="2025-07-06T23:55:57.497392084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-fe0535f741,Uid:c89eb423ee73d743b17dee73cf981800,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd4580e225d2e22aac5842131c2d41fe9de3292ba0cb1571190cfd374fcd5bd2\"" Jul 6 23:55:57.501137 containerd[1804]: time="2025-07-06T23:55:57.501063996Z" level=info msg="CreateContainer within sandbox \"fd4580e225d2e22aac5842131c2d41fe9de3292ba0cb1571190cfd374fcd5bd2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:55:57.505999 containerd[1804]: time="2025-07-06T23:55:57.505968211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-fe0535f741,Uid:0d1cf280b5de532b2bfe1d1edbbb95d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"250768a54c53b6599e66249d9e3824b7e9d0e8f9d4c570f5fe0c8d5bfc327660\"" Jul 6 23:55:57.508306 containerd[1804]: time="2025-07-06T23:55:57.508253718Z" level=info msg="CreateContainer within sandbox \"250768a54c53b6599e66249d9e3824b7e9d0e8f9d4c570f5fe0c8d5bfc327660\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:55:57.578750 kubelet[2863]: E0706 23:55:57.578688 2863 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:57.580458 containerd[1804]: time="2025-07-06T23:55:57.580411342Z" level=info msg="CreateContainer within sandbox \"d72ba789c8ec159ba9e674f93bf821102f47349a063618744d2967630a75ad68\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb6ec34f28fa6069056f3055cb42540316d209503dd47d3e4fc6b0b4d2eddd7d\"" Jul 6 23:55:57.581372 containerd[1804]: time="2025-07-06T23:55:57.581201745Z" level=info msg="StartContainer for \"bb6ec34f28fa6069056f3055cb42540316d209503dd47d3e4fc6b0b4d2eddd7d\"" Jul 6 23:55:57.585589 containerd[1804]: time="2025-07-06T23:55:57.585548858Z" level=info msg="CreateContainer within sandbox \"250768a54c53b6599e66249d9e3824b7e9d0e8f9d4c570f5fe0c8d5bfc327660\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f10309cc6a378adb9ac95165699d3ca9ef3492bc61859c57d6a8db691942c949\"" Jul 6 23:55:57.585956 containerd[1804]: time="2025-07-06T23:55:57.585930560Z" level=info msg="StartContainer for \"f10309cc6a378adb9ac95165699d3ca9ef3492bc61859c57d6a8db691942c949\"" Jul 6 23:55:57.587730 containerd[1804]: time="2025-07-06T23:55:57.587697165Z" level=info msg="CreateContainer within sandbox \"fd4580e225d2e22aac5842131c2d41fe9de3292ba0cb1571190cfd374fcd5bd2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c5f25be1856b664903b33734d2f00cf83e60c466c33f68d8e293c26ee298685\"" Jul 6 23:55:57.588057 containerd[1804]: time="2025-07-06T23:55:57.588033066Z" level=info msg="StartContainer for \"5c5f25be1856b664903b33734d2f00cf83e60c466c33f68d8e293c26ee298685\"" Jul 6 23:55:57.727216 containerd[1804]: time="2025-07-06T23:55:57.724928492Z" level=info 
msg="StartContainer for \"5c5f25be1856b664903b33734d2f00cf83e60c466c33f68d8e293c26ee298685\" returns successfully" Jul 6 23:55:57.727216 containerd[1804]: time="2025-07-06T23:55:57.725249993Z" level=info msg="StartContainer for \"bb6ec34f28fa6069056f3055cb42540316d209503dd47d3e4fc6b0b4d2eddd7d\" returns successfully" Jul 6 23:55:57.776747 containerd[1804]: time="2025-07-06T23:55:57.776693853Z" level=info msg="StartContainer for \"f10309cc6a378adb9ac95165699d3ca9ef3492bc61859c57d6a8db691942c949\" returns successfully" Jul 6 23:55:58.618140 kubelet[2863]: I0706 23:55:58.618092 2863 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:00.244230 kubelet[2863]: E0706 23:56:00.244168 2863 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-a-fe0535f741\" not found" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:00.280814 kubelet[2863]: I0706 23:56:00.280766 2863 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:00.433546 kubelet[2863]: I0706 23:56:00.433399 2863 apiserver.go:52] "Watching apiserver" Jul 6 23:56:00.451488 kubelet[2863]: I0706 23:56:00.451443 2863 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:56:00.533492 kubelet[2863]: E0706 23:56:00.533343 2863 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.4-a-fe0535f741\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:02.662375 systemd[1]: Reloading requested from client PID 3175 ('systemctl') (unit session-9.scope)... Jul 6 23:56:02.662395 systemd[1]: Reloading... Jul 6 23:56:02.753151 zram_generator::config[3218]: No configuration found. 
Jul 6 23:56:02.880810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:56:02.968969 systemd[1]: Reloading finished in 306 ms. Jul 6 23:56:03.008424 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:03.024828 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:56:03.025601 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:03.035658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:03.227294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:03.238560 (kubelet)[3292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:56:03.800905 kubelet[3292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:03.800905 kubelet[3292]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:56:03.800905 kubelet[3292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:56:03.801476 kubelet[3292]: I0706 23:56:03.800987 3292 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:56:03.808962 kubelet[3292]: I0706 23:56:03.807823 3292 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:56:03.808962 kubelet[3292]: I0706 23:56:03.807852 3292 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:56:03.808962 kubelet[3292]: I0706 23:56:03.808137 3292 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:56:03.811888 kubelet[3292]: I0706 23:56:03.811091 3292 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:56:03.817008 kubelet[3292]: I0706 23:56:03.816651 3292 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:56:03.820979 kubelet[3292]: E0706 23:56:03.820850 3292 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:56:03.821084 kubelet[3292]: I0706 23:56:03.821007 3292 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:56:03.825676 kubelet[3292]: I0706 23:56:03.824950 3292 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:56:03.825676 kubelet[3292]: I0706 23:56:03.825495 3292 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:56:03.825676 kubelet[3292]: I0706 23:56:03.825639 3292 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:56:03.825899 kubelet[3292]: I0706 23:56:03.825672 3292 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-fe0535f741","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyM
anagerPolicyOptions":null,"CgroupVersion":1} Jul 6 23:56:03.826054 kubelet[3292]: I0706 23:56:03.825917 3292 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:56:03.826054 kubelet[3292]: I0706 23:56:03.825932 3292 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:56:03.826054 kubelet[3292]: I0706 23:56:03.825966 3292 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:03.826193 kubelet[3292]: I0706 23:56:03.826091 3292 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:56:03.826193 kubelet[3292]: I0706 23:56:03.826107 3292 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:56:03.826372 kubelet[3292]: I0706 23:56:03.826342 3292 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:56:03.826698 kubelet[3292]: I0706 23:56:03.826452 3292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:56:03.829475 kubelet[3292]: I0706 23:56:03.829456 3292 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:56:03.831135 kubelet[3292]: I0706 23:56:03.831092 3292 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:56:03.832693 kubelet[3292]: I0706 23:56:03.831676 3292 server.go:1274] "Started kubelet" Jul 6 23:56:03.835873 kubelet[3292]: I0706 23:56:03.834775 3292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:56:03.838342 kubelet[3292]: I0706 23:56:03.838283 3292 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:56:03.840884 kubelet[3292]: I0706 23:56:03.840859 3292 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:56:03.848893 kubelet[3292]: I0706 23:56:03.845488 3292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:56:03.848893 kubelet[3292]: I0706 23:56:03.847329 3292 server.go:236] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:56:03.848893 kubelet[3292]: I0706 23:56:03.847679 3292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:56:03.868151 kubelet[3292]: I0706 23:56:03.856128 3292 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:56:03.868151 kubelet[3292]: I0706 23:56:03.856186 3292 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:56:03.868151 kubelet[3292]: I0706 23:56:03.867049 3292 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:56:03.868151 kubelet[3292]: E0706 23:56:03.856376 3292 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fe0535f741\" not found" Jul 6 23:56:03.868151 kubelet[3292]: I0706 23:56:03.867602 3292 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:56:03.868151 kubelet[3292]: I0706 23:56:03.867718 3292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:56:03.878765 kubelet[3292]: I0706 23:56:03.876397 3292 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:56:03.885020 kubelet[3292]: I0706 23:56:03.884982 3292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:56:03.886753 kubelet[3292]: I0706 23:56:03.886734 3292 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:56:03.886983 kubelet[3292]: I0706 23:56:03.886911 3292 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:56:03.886983 kubelet[3292]: I0706 23:56:03.886939 3292 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:56:03.887503 kubelet[3292]: E0706 23:56:03.887194 3292 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:56:03.968039 kubelet[3292]: I0706 23:56:03.968012 3292 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:56:03.968232 kubelet[3292]: I0706 23:56:03.968220 3292 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:56:03.968305 kubelet[3292]: I0706 23:56:03.968297 3292 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:03.968571 kubelet[3292]: I0706 23:56:03.968557 3292 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:56:03.968729 kubelet[3292]: I0706 23:56:03.968634 3292 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:56:03.968729 kubelet[3292]: I0706 23:56:03.968661 3292 policy_none.go:49] "None policy: Start" Jul 6 23:56:03.969808 kubelet[3292]: I0706 23:56:03.969512 3292 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:56:03.969808 kubelet[3292]: I0706 23:56:03.969535 3292 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:56:03.969808 kubelet[3292]: I0706 23:56:03.969740 3292 state_mem.go:75] "Updated machine memory state" Jul 6 23:56:03.971414 kubelet[3292]: I0706 23:56:03.971395 3292 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:56:03.973552 kubelet[3292]: I0706 23:56:03.972332 3292 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:56:03.973552 kubelet[3292]: I0706 23:56:03.972353 3292 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:56:03.974408 kubelet[3292]: I0706 23:56:03.974394 3292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:56:04.000384 kubelet[3292]: W0706 23:56:04.000343 3292 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:04.007564 kubelet[3292]: W0706 23:56:04.007223 3292 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:04.007564 kubelet[3292]: W0706 23:56:04.007428 3292 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:04.068328 kubelet[3292]: I0706 23:56:04.067889 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba27b2eef6698fa2441dfb3253e5147a-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-fe0535f741\" (UID: \"ba27b2eef6698fa2441dfb3253e5147a\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.068328 kubelet[3292]: I0706 23:56:04.067941 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba27b2eef6698fa2441dfb3253e5147a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-fe0535f741\" (UID: \"ba27b2eef6698fa2441dfb3253e5147a\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.068328 kubelet[3292]: I0706 23:56:04.067980 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba27b2eef6698fa2441dfb3253e5147a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-fe0535f741\" 
(UID: \"ba27b2eef6698fa2441dfb3253e5147a\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.068328 kubelet[3292]: I0706 23:56:04.068044 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.068328 kubelet[3292]: I0706 23:56:04.068070 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.068681 kubelet[3292]: I0706 23:56:04.068092 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.068681 kubelet[3292]: I0706 23:56:04.068134 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.068681 kubelet[3292]: I0706 23:56:04.068159 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c89eb423ee73d743b17dee73cf981800-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-fe0535f741\" (UID: \"c89eb423ee73d743b17dee73cf981800\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.068681 kubelet[3292]: I0706 23:56:04.068185 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d1cf280b5de532b2bfe1d1edbbb95d8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-fe0535f741\" (UID: \"0d1cf280b5de532b2bfe1d1edbbb95d8\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.087047 kubelet[3292]: I0706 23:56:04.085859 3292 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.098687 kubelet[3292]: I0706 23:56:04.098650 3292 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.099139 kubelet[3292]: I0706 23:56:04.098924 3292 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.841934 kubelet[3292]: I0706 23:56:04.841896 3292 apiserver.go:52] "Watching apiserver" Jul 6 23:56:04.867493 kubelet[3292]: I0706 23:56:04.867444 3292 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:56:04.936811 kubelet[3292]: W0706 23:56:04.936467 3292 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:04.936811 kubelet[3292]: E0706 23:56:04.936555 3292 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.4-a-fe0535f741\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-a-fe0535f741" Jul 6 23:56:04.950646 kubelet[3292]: I0706 23:56:04.950083 3292 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-a-fe0535f741" podStartSLOduration=1.950066129 podStartE2EDuration="1.950066129s" podCreationTimestamp="2025-07-06 23:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:04.949376425 +0000 UTC m=+1.705708166" watchObservedRunningTime="2025-07-06 23:56:04.950066129 +0000 UTC m=+1.706397970" Jul 6 23:56:04.975786 kubelet[3292]: I0706 23:56:04.975460 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-a-fe0535f741" podStartSLOduration=1.9754383720000002 podStartE2EDuration="1.975438372s" podCreationTimestamp="2025-07-06 23:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:04.962841601 +0000 UTC m=+1.719173442" watchObservedRunningTime="2025-07-06 23:56:04.975438372 +0000 UTC m=+1.731770113" Jul 6 23:56:04.985888 kubelet[3292]: I0706 23:56:04.985633 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fe0535f741" podStartSLOduration=1.9855915290000001 podStartE2EDuration="1.985591529s" podCreationTimestamp="2025-07-06 23:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:04.975666673 +0000 UTC m=+1.731998414" watchObservedRunningTime="2025-07-06 23:56:04.985591529 +0000 UTC m=+1.741923370" Jul 6 23:56:07.837171 kubelet[3292]: I0706 23:56:07.837107 3292 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:56:07.837847 containerd[1804]: time="2025-07-06T23:56:07.837780951Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 6 23:56:07.838250 kubelet[3292]: I0706 23:56:07.838005 3292 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:56:08.894745 kubelet[3292]: I0706 23:56:08.894697 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20348263-add4-4e78-b2ff-40c96040736c-xtables-lock\") pod \"kube-proxy-dcrbc\" (UID: \"20348263-add4-4e78-b2ff-40c96040736c\") " pod="kube-system/kube-proxy-dcrbc" Jul 6 23:56:08.895373 kubelet[3292]: I0706 23:56:08.894781 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20348263-add4-4e78-b2ff-40c96040736c-lib-modules\") pod \"kube-proxy-dcrbc\" (UID: \"20348263-add4-4e78-b2ff-40c96040736c\") " pod="kube-system/kube-proxy-dcrbc" Jul 6 23:56:08.895373 kubelet[3292]: I0706 23:56:08.894809 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjgff\" (UniqueName: \"kubernetes.io/projected/20348263-add4-4e78-b2ff-40c96040736c-kube-api-access-fjgff\") pod \"kube-proxy-dcrbc\" (UID: \"20348263-add4-4e78-b2ff-40c96040736c\") " pod="kube-system/kube-proxy-dcrbc" Jul 6 23:56:08.895373 kubelet[3292]: I0706 23:56:08.894834 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20348263-add4-4e78-b2ff-40c96040736c-kube-proxy\") pod \"kube-proxy-dcrbc\" (UID: \"20348263-add4-4e78-b2ff-40c96040736c\") " pod="kube-system/kube-proxy-dcrbc" Jul 6 23:56:09.095991 kubelet[3292]: I0706 23:56:09.095951 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95pr8\" (UniqueName: \"kubernetes.io/projected/01f41712-a9c4-403b-8be2-13bf0e4ba295-kube-api-access-95pr8\") pod 
\"tigera-operator-5bf8dfcb4-ttm47\" (UID: \"01f41712-a9c4-403b-8be2-13bf0e4ba295\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-ttm47" Jul 6 23:56:09.095991 kubelet[3292]: I0706 23:56:09.095994 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01f41712-a9c4-403b-8be2-13bf0e4ba295-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-ttm47\" (UID: \"01f41712-a9c4-403b-8be2-13bf0e4ba295\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-ttm47" Jul 6 23:56:09.131729 containerd[1804]: time="2025-07-06T23:56:09.131689640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dcrbc,Uid:20348263-add4-4e78-b2ff-40c96040736c,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:09.171341 containerd[1804]: time="2025-07-06T23:56:09.171081865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:09.171341 containerd[1804]: time="2025-07-06T23:56:09.171232965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:09.171341 containerd[1804]: time="2025-07-06T23:56:09.171312966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.171732 containerd[1804]: time="2025-07-06T23:56:09.171501267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.230016 containerd[1804]: time="2025-07-06T23:56:09.229909901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dcrbc,Uid:20348263-add4-4e78-b2ff-40c96040736c,Namespace:kube-system,Attempt:0,} returns sandbox id \"27485242c4b54263633909a4629be607cedd7bbe553bc02acaa4d28e9af0322c\"" Jul 6 23:56:09.233599 containerd[1804]: time="2025-07-06T23:56:09.233562821Z" level=info msg="CreateContainer within sandbox \"27485242c4b54263633909a4629be607cedd7bbe553bc02acaa4d28e9af0322c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:56:09.272286 containerd[1804]: time="2025-07-06T23:56:09.272235742Z" level=info msg="CreateContainer within sandbox \"27485242c4b54263633909a4629be607cedd7bbe553bc02acaa4d28e9af0322c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79f6ba9c5d85420f4a24daec873843beca5b313e70364317565b1a07b4c94180\"" Jul 6 23:56:09.273336 containerd[1804]: time="2025-07-06T23:56:09.273307648Z" level=info msg="StartContainer for \"79f6ba9c5d85420f4a24daec873843beca5b313e70364317565b1a07b4c94180\"" Jul 6 23:56:09.334458 containerd[1804]: time="2025-07-06T23:56:09.334224596Z" level=info msg="StartContainer for \"79f6ba9c5d85420f4a24daec873843beca5b313e70364317565b1a07b4c94180\" returns successfully" Jul 6 23:56:09.360469 containerd[1804]: time="2025-07-06T23:56:09.360420346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-ttm47,Uid:01f41712-a9c4-403b-8be2-13bf0e4ba295,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:56:09.414227 containerd[1804]: time="2025-07-06T23:56:09.414105252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:09.414227 containerd[1804]: time="2025-07-06T23:56:09.414173153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:09.414489 containerd[1804]: time="2025-07-06T23:56:09.414197553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.414489 containerd[1804]: time="2025-07-06T23:56:09.414300454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:09.483213 containerd[1804]: time="2025-07-06T23:56:09.481952840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-ttm47,Uid:01f41712-a9c4-403b-8be2-13bf0e4ba295,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4fcbac7757f23b58d8d7c010a2d7c993f80e88a4144dccbc851a288b13cee36b\"" Jul 6 23:56:09.486893 containerd[1804]: time="2025-07-06T23:56:09.485905562Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:56:10.851591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1473193329.mount: Deactivated successfully. 
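The kubelet `reconciler_common.go:245` "VerifyControllerAttachedVolume" entries above are hard to scan in wrapped journal output. A minimal sketch for grouping volume names by pod when reviewing such a dump (the helper name is mine; the sample line is trimmed from this log, with the UniqueName clause elided):

```python
import re

# Patterns for kubelet "VerifyControllerAttachedVolume" journal entries,
# where the inner quotes are backslash-escaped by the structured logger.
VOLUME = re.compile(r'for volume \\"([^"\\]+)\\"')
POD = re.compile(r'pod="([^"]+)"')

def pod_volumes(lines):
    """Group volume names by the pod they are being attached to."""
    grouped = {}
    for line in lines:
        vol, pod = VOLUME.search(line), POD.search(line)
        if vol and pod:
            grouped.setdefault(pod.group(1), []).append(vol.group(1))
    return grouped

# One entry from this log, trimmed for readability.
sample = [
    r'kubelet[3292]: I0706 23:56:08.894697 3292 reconciler_common.go:245] '
    r'"operationExecutor.VerifyControllerAttachedVolume started for volume '
    r'\"xtables-lock\" " pod="kube-system/kube-proxy-dcrbc"',
]
print(pod_volumes(sample))  # {'kube-system/kube-proxy-dcrbc': ['xtables-lock']}
```

Run over the full journal, this collapses the volume-attach chatter for kube-proxy-dcrbc, the tigera-operator pod, calico-typha, and calico-node into one mapping per pod.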
Jul 6 23:56:11.503957 containerd[1804]: time="2025-07-06T23:56:11.503901687Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:11.506831 containerd[1804]: time="2025-07-06T23:56:11.506767803Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:56:11.510430 containerd[1804]: time="2025-07-06T23:56:11.510375224Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:11.515564 containerd[1804]: time="2025-07-06T23:56:11.515513753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:11.516801 containerd[1804]: time="2025-07-06T23:56:11.516180957Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.030233594s" Jul 6 23:56:11.516801 containerd[1804]: time="2025-07-06T23:56:11.516223357Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:56:11.518509 containerd[1804]: time="2025-07-06T23:56:11.518329869Z" level=info msg="CreateContainer within sandbox \"4fcbac7757f23b58d8d7c010a2d7c993f80e88a4144dccbc851a288b13cee36b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:56:11.557310 containerd[1804]: time="2025-07-06T23:56:11.557263591Z" level=info msg="CreateContainer within sandbox 
\"4fcbac7757f23b58d8d7c010a2d7c993f80e88a4144dccbc851a288b13cee36b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d6bce789e2b7aa0e9097bda4eb3a2ae2aafb0c903f5de2003a4516cf603ace02\"" Jul 6 23:56:11.557949 containerd[1804]: time="2025-07-06T23:56:11.557851095Z" level=info msg="StartContainer for \"d6bce789e2b7aa0e9097bda4eb3a2ae2aafb0c903f5de2003a4516cf603ace02\"" Jul 6 23:56:11.614914 containerd[1804]: time="2025-07-06T23:56:11.614869320Z" level=info msg="StartContainer for \"d6bce789e2b7aa0e9097bda4eb3a2ae2aafb0c903f5de2003a4516cf603ace02\" returns successfully" Jul 6 23:56:11.663208 kubelet[3292]: I0706 23:56:11.663127 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dcrbc" podStartSLOduration=3.663071896 podStartE2EDuration="3.663071896s" podCreationTimestamp="2025-07-06 23:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:09.945481287 +0000 UTC m=+6.701813028" watchObservedRunningTime="2025-07-06 23:56:11.663071896 +0000 UTC m=+8.419403737" Jul 6 23:56:15.931354 kubelet[3292]: I0706 23:56:15.931258 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-ttm47" podStartSLOduration=4.897917376 podStartE2EDuration="6.930903785s" podCreationTimestamp="2025-07-06 23:56:09 +0000 UTC" firstStartedPulling="2025-07-06 23:56:09.484026652 +0000 UTC m=+6.240358393" lastFinishedPulling="2025-07-06 23:56:11.517013061 +0000 UTC m=+8.273344802" observedRunningTime="2025-07-06 23:56:11.960403594 +0000 UTC m=+8.716735335" watchObservedRunningTime="2025-07-06 23:56:15.930903785 +0000 UTC m=+12.687235526" Jul 6 23:56:17.986550 sudo[2343]: pam_unix(sudo:session): session closed for user root Jul 6 23:56:18.090812 sshd[2339]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:18.098845 systemd[1]: 
sshd@6-10.200.8.39:22-10.200.16.10:55258.service: Deactivated successfully. Jul 6 23:56:18.117276 systemd-logind[1768]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:56:18.119594 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:56:18.121787 systemd-logind[1768]: Removed session 9. Jul 6 23:56:22.277371 kubelet[3292]: I0706 23:56:22.277015 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/24b1324c-7db7-4092-9a9a-a1af3ad05621-typha-certs\") pod \"calico-typha-5bd595696b-kpmld\" (UID: \"24b1324c-7db7-4092-9a9a-a1af3ad05621\") " pod="calico-system/calico-typha-5bd595696b-kpmld" Jul 6 23:56:22.277371 kubelet[3292]: I0706 23:56:22.277100 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24b1324c-7db7-4092-9a9a-a1af3ad05621-tigera-ca-bundle\") pod \"calico-typha-5bd595696b-kpmld\" (UID: \"24b1324c-7db7-4092-9a9a-a1af3ad05621\") " pod="calico-system/calico-typha-5bd595696b-kpmld" Jul 6 23:56:22.277371 kubelet[3292]: I0706 23:56:22.277190 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgptv\" (UniqueName: \"kubernetes.io/projected/24b1324c-7db7-4092-9a9a-a1af3ad05621-kube-api-access-wgptv\") pod \"calico-typha-5bd595696b-kpmld\" (UID: \"24b1324c-7db7-4092-9a9a-a1af3ad05621\") " pod="calico-system/calico-typha-5bd595696b-kpmld" Jul 6 23:56:22.479427 kubelet[3292]: I0706 23:56:22.479368 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e8923710-bb35-4f7b-bea9-da7320eb2350-policysync\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479427 kubelet[3292]: I0706 23:56:22.479432 3292 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e8923710-bb35-4f7b-bea9-da7320eb2350-var-run-calico\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479663 kubelet[3292]: I0706 23:56:22.479459 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fgmj\" (UniqueName: \"kubernetes.io/projected/e8923710-bb35-4f7b-bea9-da7320eb2350-kube-api-access-7fgmj\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479663 kubelet[3292]: I0706 23:56:22.479482 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e8923710-bb35-4f7b-bea9-da7320eb2350-flexvol-driver-host\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479663 kubelet[3292]: I0706 23:56:22.479505 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8923710-bb35-4f7b-bea9-da7320eb2350-lib-modules\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479663 kubelet[3292]: I0706 23:56:22.479524 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e8923710-bb35-4f7b-bea9-da7320eb2350-node-certs\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479663 kubelet[3292]: I0706 23:56:22.479544 3292 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8923710-bb35-4f7b-bea9-da7320eb2350-tigera-ca-bundle\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479851 kubelet[3292]: I0706 23:56:22.479588 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e8923710-bb35-4f7b-bea9-da7320eb2350-cni-log-dir\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479851 kubelet[3292]: I0706 23:56:22.479613 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e8923710-bb35-4f7b-bea9-da7320eb2350-cni-net-dir\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479851 kubelet[3292]: I0706 23:56:22.479637 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8923710-bb35-4f7b-bea9-da7320eb2350-var-lib-calico\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479851 kubelet[3292]: I0706 23:56:22.479658 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8923710-bb35-4f7b-bea9-da7320eb2350-xtables-lock\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.479851 kubelet[3292]: I0706 23:56:22.479679 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e8923710-bb35-4f7b-bea9-da7320eb2350-cni-bin-dir\") pod \"calico-node-mtjlq\" (UID: \"e8923710-bb35-4f7b-bea9-da7320eb2350\") " pod="calico-system/calico-node-mtjlq" Jul 6 23:56:22.506009 containerd[1804]: time="2025-07-06T23:56:22.505943229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bd595696b-kpmld,Uid:24b1324c-7db7-4092-9a9a-a1af3ad05621,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:22.561792 containerd[1804]: time="2025-07-06T23:56:22.561423063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:22.561792 containerd[1804]: time="2025-07-06T23:56:22.561654564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:22.561792 containerd[1804]: time="2025-07-06T23:56:22.561677264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:22.564287 containerd[1804]: time="2025-07-06T23:56:22.564148775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:22.592342 kubelet[3292]: E0706 23:56:22.589940 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.592342 kubelet[3292]: W0706 23:56:22.589974 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.592342 kubelet[3292]: E0706 23:56:22.590010 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.600391 kubelet[3292]: E0706 23:56:22.599995 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.600391 kubelet[3292]: W0706 23:56:22.600020 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.600391 kubelet[3292]: E0706 23:56:22.600044 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.601048 kubelet[3292]: E0706 23:56:22.601013 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.601290 kubelet[3292]: W0706 23:56:22.601231 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.601290 kubelet[3292]: E0706 23:56:22.601259 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.643880 containerd[1804]: time="2025-07-06T23:56:22.643841611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bd595696b-kpmld,Uid:24b1324c-7db7-4092-9a9a-a1af3ad05621,Namespace:calico-system,Attempt:0,} returns sandbox id \"2de95b59b1bfc9e7b5af2f5411a79ca211c530121e25945d4fba4dbf04319c43\"" Jul 6 23:56:22.647076 containerd[1804]: time="2025-07-06T23:56:22.647001324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:56:22.727255 kubelet[3292]: E0706 23:56:22.726650 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2vkj" podUID="d05fe5f5-a0d0-4818-841f-97f17bafd42f" Jul 6 23:56:22.741763 containerd[1804]: time="2025-07-06T23:56:22.741663724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtjlq,Uid:e8923710-bb35-4f7b-bea9-da7320eb2350,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:22.780897 kubelet[3292]: E0706 23:56:22.780831 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.781423 kubelet[3292]: W0706 23:56:22.780958 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.781423 kubelet[3292]: E0706 23:56:22.780998 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.782010 kubelet[3292]: E0706 23:56:22.781826 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.782010 kubelet[3292]: W0706 23:56:22.781846 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.782010 kubelet[3292]: E0706 23:56:22.781869 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.782791 kubelet[3292]: E0706 23:56:22.782290 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.782791 kubelet[3292]: W0706 23:56:22.782304 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.782791 kubelet[3292]: E0706 23:56:22.782557 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.783422 kubelet[3292]: E0706 23:56:22.783229 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.783422 kubelet[3292]: W0706 23:56:22.783245 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.783422 kubelet[3292]: E0706 23:56:22.783263 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.783705 kubelet[3292]: E0706 23:56:22.783538 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.784234 kubelet[3292]: W0706 23:56:22.783843 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.784234 kubelet[3292]: E0706 23:56:22.783867 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.784456 kubelet[3292]: E0706 23:56:22.784312 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.784456 kubelet[3292]: W0706 23:56:22.784325 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.784456 kubelet[3292]: E0706 23:56:22.784340 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.785057 kubelet[3292]: E0706 23:56:22.784894 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.785057 kubelet[3292]: W0706 23:56:22.784908 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.785057 kubelet[3292]: E0706 23:56:22.784923 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.785456 kubelet[3292]: E0706 23:56:22.785184 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.785456 kubelet[3292]: W0706 23:56:22.785195 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.785456 kubelet[3292]: E0706 23:56:22.785208 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.785928 kubelet[3292]: E0706 23:56:22.785821 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.785928 kubelet[3292]: W0706 23:56:22.785853 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.785928 kubelet[3292]: E0706 23:56:22.785871 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.786666 kubelet[3292]: E0706 23:56:22.786336 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.786666 kubelet[3292]: W0706 23:56:22.786351 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.786666 kubelet[3292]: E0706 23:56:22.786366 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.787083 kubelet[3292]: E0706 23:56:22.787066 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.787170 kubelet[3292]: W0706 23:56:22.787083 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.787170 kubelet[3292]: E0706 23:56:22.787113 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.787724 kubelet[3292]: E0706 23:56:22.787342 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.787724 kubelet[3292]: W0706 23:56:22.787355 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.787724 kubelet[3292]: E0706 23:56:22.787368 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.787724 kubelet[3292]: E0706 23:56:22.787565 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.787724 kubelet[3292]: W0706 23:56:22.787575 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.787724 kubelet[3292]: E0706 23:56:22.787587 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.788082 kubelet[3292]: E0706 23:56:22.787863 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.788082 kubelet[3292]: W0706 23:56:22.787875 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.788082 kubelet[3292]: E0706 23:56:22.787889 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.788082 kubelet[3292]: E0706 23:56:22.788077 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.788313 kubelet[3292]: W0706 23:56:22.788087 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.788313 kubelet[3292]: E0706 23:56:22.788101 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.788828 kubelet[3292]: E0706 23:56:22.788434 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.788828 kubelet[3292]: W0706 23:56:22.788447 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.788828 kubelet[3292]: E0706 23:56:22.788460 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.789386 kubelet[3292]: E0706 23:56:22.789372 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.789498 kubelet[3292]: W0706 23:56:22.789486 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.789604 kubelet[3292]: E0706 23:56:22.789573 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.789977 kubelet[3292]: E0706 23:56:22.789860 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.789977 kubelet[3292]: W0706 23:56:22.789872 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.789977 kubelet[3292]: E0706 23:56:22.789898 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.790707 kubelet[3292]: E0706 23:56:22.790599 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.790707 kubelet[3292]: W0706 23:56:22.790613 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.790707 kubelet[3292]: E0706 23:56:22.790627 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.791089 kubelet[3292]: E0706 23:56:22.790988 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.791089 kubelet[3292]: W0706 23:56:22.791000 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.791089 kubelet[3292]: E0706 23:56:22.791014 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.792526 kubelet[3292]: E0706 23:56:22.792444 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.792526 kubelet[3292]: W0706 23:56:22.792473 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.792526 kubelet[3292]: E0706 23:56:22.792491 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.793625 kubelet[3292]: I0706 23:56:22.793488 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d05fe5f5-a0d0-4818-841f-97f17bafd42f-registration-dir\") pod \"csi-node-driver-q2vkj\" (UID: \"d05fe5f5-a0d0-4818-841f-97f17bafd42f\") " pod="calico-system/csi-node-driver-q2vkj" Jul 6 23:56:22.794193 kubelet[3292]: E0706 23:56:22.794173 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.794272 kubelet[3292]: W0706 23:56:22.794193 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.794272 kubelet[3292]: E0706 23:56:22.794218 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.795477 kubelet[3292]: E0706 23:56:22.795380 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.795477 kubelet[3292]: W0706 23:56:22.795399 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.795477 kubelet[3292]: E0706 23:56:22.795431 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.797245 kubelet[3292]: E0706 23:56:22.797219 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.797245 kubelet[3292]: W0706 23:56:22.797238 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.797518 kubelet[3292]: E0706 23:56:22.797269 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.797518 kubelet[3292]: I0706 23:56:22.797399 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d05fe5f5-a0d0-4818-841f-97f17bafd42f-kubelet-dir\") pod \"csi-node-driver-q2vkj\" (UID: \"d05fe5f5-a0d0-4818-841f-97f17bafd42f\") " pod="calico-system/csi-node-driver-q2vkj" Jul 6 23:56:22.799671 kubelet[3292]: E0706 23:56:22.799540 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.799671 kubelet[3292]: W0706 23:56:22.799558 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.799671 kubelet[3292]: E0706 23:56:22.799572 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.799671 kubelet[3292]: I0706 23:56:22.799597 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d05fe5f5-a0d0-4818-841f-97f17bafd42f-socket-dir\") pod \"csi-node-driver-q2vkj\" (UID: \"d05fe5f5-a0d0-4818-841f-97f17bafd42f\") " pod="calico-system/csi-node-driver-q2vkj" Jul 6 23:56:22.800086 kubelet[3292]: E0706 23:56:22.799800 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.800086 kubelet[3292]: W0706 23:56:22.799814 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.800086 kubelet[3292]: E0706 23:56:22.799827 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.800086 kubelet[3292]: I0706 23:56:22.799849 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4pg\" (UniqueName: \"kubernetes.io/projected/d05fe5f5-a0d0-4818-841f-97f17bafd42f-kube-api-access-qt4pg\") pod \"csi-node-driver-q2vkj\" (UID: \"d05fe5f5-a0d0-4818-841f-97f17bafd42f\") " pod="calico-system/csi-node-driver-q2vkj" Jul 6 23:56:22.800674 kubelet[3292]: E0706 23:56:22.800436 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.800674 kubelet[3292]: W0706 23:56:22.800449 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.800674 kubelet[3292]: E0706 23:56:22.800550 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.800674 kubelet[3292]: I0706 23:56:22.800590 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d05fe5f5-a0d0-4818-841f-97f17bafd42f-varrun\") pod \"csi-node-driver-q2vkj\" (UID: \"d05fe5f5-a0d0-4818-841f-97f17bafd42f\") " pod="calico-system/csi-node-driver-q2vkj" Jul 6 23:56:22.801097 kubelet[3292]: E0706 23:56:22.800892 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.801097 kubelet[3292]: W0706 23:56:22.800907 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.801097 kubelet[3292]: E0706 23:56:22.801023 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.801538 kubelet[3292]: E0706 23:56:22.801525 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.801538 kubelet[3292]: W0706 23:56:22.801562 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.801873 kubelet[3292]: E0706 23:56:22.801648 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.802037 kubelet[3292]: E0706 23:56:22.802027 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.802166 kubelet[3292]: W0706 23:56:22.802102 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.802339 kubelet[3292]: E0706 23:56:22.802258 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.802711 kubelet[3292]: E0706 23:56:22.802669 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.802711 kubelet[3292]: W0706 23:56:22.802683 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.802711 kubelet[3292]: E0706 23:56:22.802707 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.803325 kubelet[3292]: E0706 23:56:22.802907 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.803325 kubelet[3292]: W0706 23:56:22.802920 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.803325 kubelet[3292]: E0706 23:56:22.802939 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.803853 kubelet[3292]: E0706 23:56:22.803672 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.803853 kubelet[3292]: W0706 23:56:22.803687 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.803853 kubelet[3292]: E0706 23:56:22.803702 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.804286 kubelet[3292]: E0706 23:56:22.804165 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.804286 kubelet[3292]: W0706 23:56:22.804197 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.804286 kubelet[3292]: E0706 23:56:22.804215 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.804690 kubelet[3292]: E0706 23:56:22.804586 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.804690 kubelet[3292]: W0706 23:56:22.804627 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.804690 kubelet[3292]: E0706 23:56:22.804654 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.806189 containerd[1804]: time="2025-07-06T23:56:22.805988695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:22.806189 containerd[1804]: time="2025-07-06T23:56:22.806032495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:22.806189 containerd[1804]: time="2025-07-06T23:56:22.806042496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:22.806189 containerd[1804]: time="2025-07-06T23:56:22.806147096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:22.849600 containerd[1804]: time="2025-07-06T23:56:22.848723876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtjlq,Uid:e8923710-bb35-4f7b-bea9-da7320eb2350,Namespace:calico-system,Attempt:0,} returns sandbox id \"a397854aae93e2873a480d698676457189a6053bd1fb3024eb958b3c8f68563b\"" Jul 6 23:56:22.904070 kubelet[3292]: E0706 23:56:22.903781 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.904070 kubelet[3292]: W0706 23:56:22.903828 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.904070 kubelet[3292]: E0706 23:56:22.903856 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.905001 kubelet[3292]: E0706 23:56:22.904709 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.905001 kubelet[3292]: W0706 23:56:22.904730 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.905001 kubelet[3292]: E0706 23:56:22.904778 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.905542 kubelet[3292]: E0706 23:56:22.905427 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.905542 kubelet[3292]: W0706 23:56:22.905442 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.905542 kubelet[3292]: E0706 23:56:22.905484 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.906113 kubelet[3292]: E0706 23:56:22.906035 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.906113 kubelet[3292]: W0706 23:56:22.906052 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.906113 kubelet[3292]: E0706 23:56:22.906074 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.906681 kubelet[3292]: E0706 23:56:22.906621 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.906681 kubelet[3292]: W0706 23:56:22.906637 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.906681 kubelet[3292]: E0706 23:56:22.906651 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.907769 kubelet[3292]: E0706 23:56:22.907709 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.907769 kubelet[3292]: W0706 23:56:22.907724 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.907769 kubelet[3292]: E0706 23:56:22.907739 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.908574 kubelet[3292]: E0706 23:56:22.908560 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.908794 kubelet[3292]: W0706 23:56:22.908781 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.909059 kubelet[3292]: E0706 23:56:22.908929 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.909711 kubelet[3292]: E0706 23:56:22.909629 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.909711 kubelet[3292]: W0706 23:56:22.909643 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.909921 kubelet[3292]: E0706 23:56:22.909747 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.910319 kubelet[3292]: E0706 23:56:22.910188 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.910319 kubelet[3292]: W0706 23:56:22.910216 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.910319 kubelet[3292]: E0706 23:56:22.910297 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.910796 kubelet[3292]: E0706 23:56:22.910700 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.910796 kubelet[3292]: W0706 23:56:22.910714 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.911092 kubelet[3292]: E0706 23:56:22.910737 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.911347 kubelet[3292]: E0706 23:56:22.911255 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.911347 kubelet[3292]: W0706 23:56:22.911284 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.911347 kubelet[3292]: E0706 23:56:22.911308 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.912343 kubelet[3292]: E0706 23:56:22.911863 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.912343 kubelet[3292]: W0706 23:56:22.911879 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.912343 kubelet[3292]: E0706 23:56:22.912231 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.913500 kubelet[3292]: E0706 23:56:22.913038 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.913500 kubelet[3292]: W0706 23:56:22.913054 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.913500 kubelet[3292]: E0706 23:56:22.913078 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.914528 kubelet[3292]: E0706 23:56:22.914020 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.914528 kubelet[3292]: W0706 23:56:22.914035 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.914528 kubelet[3292]: E0706 23:56:22.914061 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.915309 kubelet[3292]: E0706 23:56:22.915194 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.915309 kubelet[3292]: W0706 23:56:22.915210 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.915309 kubelet[3292]: E0706 23:56:22.915238 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.917361 kubelet[3292]: E0706 23:56:22.917157 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.917361 kubelet[3292]: W0706 23:56:22.917174 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.917361 kubelet[3292]: E0706 23:56:22.917202 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.917657 kubelet[3292]: E0706 23:56:22.917577 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.917657 kubelet[3292]: W0706 23:56:22.917593 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.917994 kubelet[3292]: E0706 23:56:22.917954 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.917994 kubelet[3292]: E0706 23:56:22.917965 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.918343 kubelet[3292]: W0706 23:56:22.917969 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.918343 kubelet[3292]: E0706 23:56:22.918066 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.918670 kubelet[3292]: E0706 23:56:22.918573 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.918670 kubelet[3292]: W0706 23:56:22.918587 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.918914 kubelet[3292]: E0706 23:56:22.918799 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.920141 kubelet[3292]: E0706 23:56:22.919030 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.920141 kubelet[3292]: W0706 23:56:22.919043 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.920293 kubelet[3292]: E0706 23:56:22.920276 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.920658 kubelet[3292]: E0706 23:56:22.920507 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.920658 kubelet[3292]: W0706 23:56:22.920522 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.920832 kubelet[3292]: E0706 23:56:22.920790 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.921189 kubelet[3292]: E0706 23:56:22.921021 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.921189 kubelet[3292]: W0706 23:56:22.921033 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.921346 kubelet[3292]: E0706 23:56:22.921305 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.921632 kubelet[3292]: E0706 23:56:22.921560 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.921632 kubelet[3292]: W0706 23:56:22.921575 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.922458 kubelet[3292]: E0706 23:56:22.922151 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.923165 kubelet[3292]: E0706 23:56:22.922714 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.923280 kubelet[3292]: W0706 23:56:22.923264 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.923502 kubelet[3292]: E0706 23:56:22.923348 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:22.923717 kubelet[3292]: E0706 23:56:22.923704 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.923803 kubelet[3292]: W0706 23:56:22.923790 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.924200 kubelet[3292]: E0706 23:56:22.924160 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:22.948646 kubelet[3292]: E0706 23:56:22.948530 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:22.948646 kubelet[3292]: W0706 23:56:22.948574 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:22.948646 kubelet[3292]: E0706 23:56:22.948599 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:23.397389 systemd[1]: run-containerd-runc-k8s.io-2de95b59b1bfc9e7b5af2f5411a79ca211c530121e25945d4fba4dbf04319c43-runc.DnleuH.mount: Deactivated successfully. 
Jul 6 23:56:24.887751 kubelet[3292]: E0706 23:56:24.887683 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2vkj" podUID="d05fe5f5-a0d0-4818-841f-97f17bafd42f" Jul 6 23:56:25.180513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028802470.mount: Deactivated successfully. Jul 6 23:56:26.483268 containerd[1804]: time="2025-07-06T23:56:26.483198570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:26.486563 containerd[1804]: time="2025-07-06T23:56:26.486209087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 6 23:56:26.491881 containerd[1804]: time="2025-07-06T23:56:26.491827917Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:26.499276 containerd[1804]: time="2025-07-06T23:56:26.499209757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:26.500358 containerd[1804]: time="2025-07-06T23:56:26.499902160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.852858735s" Jul 6 23:56:26.500358 containerd[1804]: time="2025-07-06T23:56:26.499939561Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 6 23:56:26.501433 containerd[1804]: time="2025-07-06T23:56:26.501410869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:56:26.522627 containerd[1804]: time="2025-07-06T23:56:26.522300381Z" level=info msg="CreateContainer within sandbox \"2de95b59b1bfc9e7b5af2f5411a79ca211c530121e25945d4fba4dbf04319c43\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:56:26.579746 containerd[1804]: time="2025-07-06T23:56:26.579695091Z" level=info msg="CreateContainer within sandbox \"2de95b59b1bfc9e7b5af2f5411a79ca211c530121e25945d4fba4dbf04319c43\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"51c17a820adce96a42ce9315ba40f1a92d73c69a90c032f3d630151e4ee6f8d2\"" Jul 6 23:56:26.580336 containerd[1804]: time="2025-07-06T23:56:26.580300394Z" level=info msg="StartContainer for \"51c17a820adce96a42ce9315ba40f1a92d73c69a90c032f3d630151e4ee6f8d2\"" Jul 6 23:56:26.668211 containerd[1804]: time="2025-07-06T23:56:26.668080268Z" level=info msg="StartContainer for \"51c17a820adce96a42ce9315ba40f1a92d73c69a90c032f3d630151e4ee6f8d2\" returns successfully" Jul 6 23:56:26.887720 kubelet[3292]: E0706 23:56:26.887667 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2vkj" podUID="d05fe5f5-a0d0-4818-841f-97f17bafd42f" Jul 6 23:56:27.022856 kubelet[3292]: E0706 23:56:27.022799 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.022856 kubelet[3292]: W0706 23:56:27.022845 3292 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.022856 kubelet[3292]: E0706 23:56:27.022873 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.023892 kubelet[3292]: E0706 23:56:27.023738 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.023892 kubelet[3292]: W0706 23:56:27.023759 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.023892 kubelet[3292]: E0706 23:56:27.023778 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.024812 kubelet[3292]: E0706 23:56:27.024736 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.024812 kubelet[3292]: W0706 23:56:27.024758 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.025161 kubelet[3292]: E0706 23:56:27.025111 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.027268 kubelet[3292]: E0706 23:56:27.027247 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.027268 kubelet[3292]: W0706 23:56:27.027267 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.027402 kubelet[3292]: E0706 23:56:27.027283 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.029137 kubelet[3292]: E0706 23:56:27.027564 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.029137 kubelet[3292]: W0706 23:56:27.027579 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.029137 kubelet[3292]: E0706 23:56:27.027593 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.029330 kubelet[3292]: E0706 23:56:27.029321 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.029381 kubelet[3292]: W0706 23:56:27.029335 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.029381 kubelet[3292]: E0706 23:56:27.029349 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.031170 kubelet[3292]: E0706 23:56:27.029569 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.031170 kubelet[3292]: W0706 23:56:27.029581 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.031170 kubelet[3292]: E0706 23:56:27.029594 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.031561 kubelet[3292]: E0706 23:56:27.031412 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.031561 kubelet[3292]: W0706 23:56:27.031428 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.031561 kubelet[3292]: E0706 23:56:27.031443 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.031909 kubelet[3292]: E0706 23:56:27.031784 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.031909 kubelet[3292]: W0706 23:56:27.031800 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.031909 kubelet[3292]: E0706 23:56:27.031815 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.032259 kubelet[3292]: E0706 23:56:27.032111 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.032259 kubelet[3292]: W0706 23:56:27.032142 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.032259 kubelet[3292]: E0706 23:56:27.032157 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.034598 kubelet[3292]: E0706 23:56:27.034344 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.034598 kubelet[3292]: W0706 23:56:27.034363 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.034598 kubelet[3292]: E0706 23:56:27.034377 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.035593 kubelet[3292]: E0706 23:56:27.035205 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.035593 kubelet[3292]: W0706 23:56:27.035222 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.035593 kubelet[3292]: E0706 23:56:27.035252 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.036428 kubelet[3292]: E0706 23:56:27.036186 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.036428 kubelet[3292]: W0706 23:56:27.036200 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.036428 kubelet[3292]: E0706 23:56:27.036223 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.038153 kubelet[3292]: E0706 23:56:27.037508 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.038153 kubelet[3292]: W0706 23:56:27.037524 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.038153 kubelet[3292]: E0706 23:56:27.037540 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.038153 kubelet[3292]: E0706 23:56:27.037956 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.040458 kubelet[3292]: W0706 23:56:27.037970 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.040458 kubelet[3292]: E0706 23:56:27.040209 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.041223 kubelet[3292]: E0706 23:56:27.041203 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.041223 kubelet[3292]: W0706 23:56:27.041222 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.041341 kubelet[3292]: E0706 23:56:27.041237 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.041540 kubelet[3292]: E0706 23:56:27.041522 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.041540 kubelet[3292]: W0706 23:56:27.041539 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.041649 kubelet[3292]: E0706 23:56:27.041568 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.043295 kubelet[3292]: E0706 23:56:27.043276 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.043295 kubelet[3292]: W0706 23:56:27.043293 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.043414 kubelet[3292]: E0706 23:56:27.043319 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.043601 kubelet[3292]: E0706 23:56:27.043584 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.043601 kubelet[3292]: W0706 23:56:27.043601 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.043705 kubelet[3292]: E0706 23:56:27.043685 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.045140 kubelet[3292]: E0706 23:56:27.043843 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.045140 kubelet[3292]: W0706 23:56:27.043854 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.045140 kubelet[3292]: E0706 23:56:27.043939 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.045140 kubelet[3292]: E0706 23:56:27.044061 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.045140 kubelet[3292]: W0706 23:56:27.044070 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.045140 kubelet[3292]: E0706 23:56:27.044140 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.045140 kubelet[3292]: E0706 23:56:27.044332 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.045140 kubelet[3292]: W0706 23:56:27.044343 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.045140 kubelet[3292]: E0706 23:56:27.044369 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.045555 kubelet[3292]: E0706 23:56:27.045325 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.045555 kubelet[3292]: W0706 23:56:27.045340 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.045555 kubelet[3292]: E0706 23:56:27.045368 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.045683 kubelet[3292]: E0706 23:56:27.045625 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.045683 kubelet[3292]: W0706 23:56:27.045636 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.045764 kubelet[3292]: E0706 23:56:27.045719 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.047175 kubelet[3292]: E0706 23:56:27.046466 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.047175 kubelet[3292]: W0706 23:56:27.046482 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.047175 kubelet[3292]: E0706 23:56:27.046737 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.047175 kubelet[3292]: E0706 23:56:27.046967 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.047175 kubelet[3292]: W0706 23:56:27.046978 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.047404 kubelet[3292]: E0706 23:56:27.047307 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.048027 kubelet[3292]: E0706 23:56:27.047622 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.048027 kubelet[3292]: W0706 23:56:27.047635 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.048027 kubelet[3292]: E0706 23:56:27.047741 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.048347 kubelet[3292]: E0706 23:56:27.048258 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.048347 kubelet[3292]: W0706 23:56:27.048272 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.048347 kubelet[3292]: E0706 23:56:27.048297 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.049229 kubelet[3292]: E0706 23:56:27.048871 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.049229 kubelet[3292]: W0706 23:56:27.048885 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.049229 kubelet[3292]: E0706 23:56:27.049058 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.051970 kubelet[3292]: E0706 23:56:27.051948 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.051970 kubelet[3292]: W0706 23:56:27.051967 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.052087 kubelet[3292]: E0706 23:56:27.051983 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.052237 kubelet[3292]: E0706 23:56:27.052222 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.052303 kubelet[3292]: W0706 23:56:27.052237 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.052303 kubelet[3292]: E0706 23:56:27.052254 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.054415 kubelet[3292]: E0706 23:56:27.054393 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.054494 kubelet[3292]: W0706 23:56:27.054418 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.054494 kubelet[3292]: E0706 23:56:27.054433 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:27.054663 kubelet[3292]: E0706 23:56:27.054647 3292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:27.054724 kubelet[3292]: W0706 23:56:27.054664 3292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:27.054724 kubelet[3292]: E0706 23:56:27.054677 3292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:27.877042 containerd[1804]: time="2025-07-06T23:56:27.876914988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:27.880106 containerd[1804]: time="2025-07-06T23:56:27.880054405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 6 23:56:27.884415 containerd[1804]: time="2025-07-06T23:56:27.884324328Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:27.890163 containerd[1804]: time="2025-07-06T23:56:27.890094159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:27.890904 containerd[1804]: time="2025-07-06T23:56:27.890736763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.389197694s" Jul 6 23:56:27.890904 containerd[1804]: time="2025-07-06T23:56:27.890789863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 6 23:56:27.893172 containerd[1804]: time="2025-07-06T23:56:27.893140276Z" level=info msg="CreateContainer within sandbox \"a397854aae93e2873a480d698676457189a6053bd1fb3024eb958b3c8f68563b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:56:27.927542 containerd[1804]: time="2025-07-06T23:56:27.927441261Z" level=info msg="CreateContainer within sandbox \"a397854aae93e2873a480d698676457189a6053bd1fb3024eb958b3c8f68563b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a16b8fb5fa01d5eb9f8f06e8a93335279bde13563873bb8af9b50a3a967a6bb\"" Jul 6 23:56:27.929821 containerd[1804]: time="2025-07-06T23:56:27.928407166Z" level=info msg="StartContainer for \"7a16b8fb5fa01d5eb9f8f06e8a93335279bde13563873bb8af9b50a3a967a6bb\"" Jul 6 23:56:27.993139 kubelet[3292]: I0706 23:56:27.990318 3292 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:56:28.006828 containerd[1804]: time="2025-07-06T23:56:28.004560177Z" level=info msg="StartContainer for \"7a16b8fb5fa01d5eb9f8f06e8a93335279bde13563873bb8af9b50a3a967a6bb\" returns successfully" Jul 6 23:56:28.044976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a16b8fb5fa01d5eb9f8f06e8a93335279bde13563873bb8af9b50a3a967a6bb-rootfs.mount: Deactivated successfully. 
Jul 6 23:56:28.887698 kubelet[3292]: E0706 23:56:28.887631 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2vkj" podUID="d05fe5f5-a0d0-4818-841f-97f17bafd42f" Jul 6 23:56:29.517151 kubelet[3292]: I0706 23:56:29.010749 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bd595696b-kpmld" podStartSLOduration=3.155267556 podStartE2EDuration="7.010726104s" podCreationTimestamp="2025-07-06 23:56:22 +0000 UTC" firstStartedPulling="2025-07-06 23:56:22.645478418 +0000 UTC m=+19.401810459" lastFinishedPulling="2025-07-06 23:56:26.500937266 +0000 UTC m=+23.257269007" observedRunningTime="2025-07-06 23:56:27.03744106 +0000 UTC m=+23.793772901" watchObservedRunningTime="2025-07-06 23:56:29.010726104 +0000 UTC m=+25.767057845" Jul 6 23:56:29.578549 containerd[1804]: time="2025-07-06T23:56:29.578476967Z" level=info msg="shim disconnected" id=7a16b8fb5fa01d5eb9f8f06e8a93335279bde13563873bb8af9b50a3a967a6bb namespace=k8s.io Jul 6 23:56:29.578549 containerd[1804]: time="2025-07-06T23:56:29.578540667Z" level=warning msg="cleaning up after shim disconnected" id=7a16b8fb5fa01d5eb9f8f06e8a93335279bde13563873bb8af9b50a3a967a6bb namespace=k8s.io Jul 6 23:56:29.578549 containerd[1804]: time="2025-07-06T23:56:29.578553567Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:30.000804 containerd[1804]: time="2025-07-06T23:56:29.999639539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:56:30.887663 kubelet[3292]: E0706 23:56:30.887597 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-q2vkj" podUID="d05fe5f5-a0d0-4818-841f-97f17bafd42f" Jul 6 23:56:32.888937 kubelet[3292]: E0706 23:56:32.888193 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2vkj" podUID="d05fe5f5-a0d0-4818-841f-97f17bafd42f" Jul 6 23:56:34.086209 containerd[1804]: time="2025-07-06T23:56:34.086160690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:34.088597 containerd[1804]: time="2025-07-06T23:56:34.088542803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 6 23:56:34.094100 containerd[1804]: time="2025-07-06T23:56:34.093947032Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:34.098707 containerd[1804]: time="2025-07-06T23:56:34.098639957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:34.099978 containerd[1804]: time="2025-07-06T23:56:34.099353561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.099666522s" Jul 6 23:56:34.099978 containerd[1804]: time="2025-07-06T23:56:34.099389761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns 
image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 6 23:56:34.102000 containerd[1804]: time="2025-07-06T23:56:34.101942875Z" level=info msg="CreateContainer within sandbox \"a397854aae93e2873a480d698676457189a6053bd1fb3024eb958b3c8f68563b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:56:34.142675 containerd[1804]: time="2025-07-06T23:56:34.142625095Z" level=info msg="CreateContainer within sandbox \"a397854aae93e2873a480d698676457189a6053bd1fb3024eb958b3c8f68563b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"befc2c2b3094b57f42bd4e2d1b8afb554487871e3a56bde68ef8abbaefda65b5\"" Jul 6 23:56:34.143280 containerd[1804]: time="2025-07-06T23:56:34.143237998Z" level=info msg="StartContainer for \"befc2c2b3094b57f42bd4e2d1b8afb554487871e3a56bde68ef8abbaefda65b5\"" Jul 6 23:56:34.209987 containerd[1804]: time="2025-07-06T23:56:34.209832257Z" level=info msg="StartContainer for \"befc2c2b3094b57f42bd4e2d1b8afb554487871e3a56bde68ef8abbaefda65b5\" returns successfully" Jul 6 23:56:34.888515 kubelet[3292]: E0706 23:56:34.888386 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2vkj" podUID="d05fe5f5-a0d0-4818-841f-97f17bafd42f" Jul 6 23:56:35.893855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-befc2c2b3094b57f42bd4e2d1b8afb554487871e3a56bde68ef8abbaefda65b5-rootfs.mount: Deactivated successfully. 
Jul 6 23:56:35.929687 kubelet[3292]: I0706 23:56:35.929594 3292 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:56:36.107248 kubelet[3292]: I0706 23:56:36.107066 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/213a78c2-eb8b-4530-9913-02f60715b4f4-config-volume\") pod \"coredns-7c65d6cfc9-pjxzj\" (UID: \"213a78c2-eb8b-4530-9913-02f60715b4f4\") " pod="kube-system/coredns-7c65d6cfc9-pjxzj" Jul 6 23:56:36.107248 kubelet[3292]: I0706 23:56:36.107132 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmns\" (UniqueName: \"kubernetes.io/projected/0b182eb4-112d-494c-ad49-a4d43ae37b16-kube-api-access-ljmns\") pod \"calico-kube-controllers-79f7f6c588-x5rcf\" (UID: \"0b182eb4-112d-494c-ad49-a4d43ae37b16\") " pod="calico-system/calico-kube-controllers-79f7f6c588-x5rcf" Jul 6 23:56:36.107248 kubelet[3292]: I0706 23:56:36.107196 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/716cfedd-0158-4dcb-9ac1-1fdba73e9c13-calico-apiserver-certs\") pod \"calico-apiserver-68646bbcb-2gm8s\" (UID: \"716cfedd-0158-4dcb-9ac1-1fdba73e9c13\") " pod="calico-apiserver/calico-apiserver-68646bbcb-2gm8s" Jul 6 23:56:36.107248 kubelet[3292]: I0706 23:56:36.107224 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84b426be-d8f6-4a60-8c2e-1c346fd9da79-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-xhs8b\" (UID: \"84b426be-d8f6-4a60-8c2e-1c346fd9da79\") " pod="calico-system/goldmane-58fd7646b9-xhs8b" Jul 6 23:56:36.107248 kubelet[3292]: I0706 23:56:36.107248 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/84b426be-d8f6-4a60-8c2e-1c346fd9da79-goldmane-key-pair\") pod \"goldmane-58fd7646b9-xhs8b\" (UID: \"84b426be-d8f6-4a60-8c2e-1c346fd9da79\") " pod="calico-system/goldmane-58fd7646b9-xhs8b" Jul 6 23:56:36.108301 kubelet[3292]: I0706 23:56:36.107275 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dccsl\" (UniqueName: \"kubernetes.io/projected/716cfedd-0158-4dcb-9ac1-1fdba73e9c13-kube-api-access-dccsl\") pod \"calico-apiserver-68646bbcb-2gm8s\" (UID: \"716cfedd-0158-4dcb-9ac1-1fdba73e9c13\") " pod="calico-apiserver/calico-apiserver-68646bbcb-2gm8s" Jul 6 23:56:36.108301 kubelet[3292]: I0706 23:56:36.107295 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt25r\" (UniqueName: \"kubernetes.io/projected/213a78c2-eb8b-4530-9913-02f60715b4f4-kube-api-access-mt25r\") pod \"coredns-7c65d6cfc9-pjxzj\" (UID: \"213a78c2-eb8b-4530-9913-02f60715b4f4\") " pod="kube-system/coredns-7c65d6cfc9-pjxzj" Jul 6 23:56:36.108301 kubelet[3292]: I0706 23:56:36.107318 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2z4\" (UniqueName: \"kubernetes.io/projected/f805d877-66eb-46da-b324-d84c54cb40ca-kube-api-access-5l2z4\") pod \"calico-apiserver-68646bbcb-rvxtr\" (UID: \"f805d877-66eb-46da-b324-d84c54cb40ca\") " pod="calico-apiserver/calico-apiserver-68646bbcb-rvxtr" Jul 6 23:56:36.108301 kubelet[3292]: I0706 23:56:36.107362 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq957\" (UniqueName: \"kubernetes.io/projected/84b426be-d8f6-4a60-8c2e-1c346fd9da79-kube-api-access-qq957\") pod \"goldmane-58fd7646b9-xhs8b\" (UID: \"84b426be-d8f6-4a60-8c2e-1c346fd9da79\") " pod="calico-system/goldmane-58fd7646b9-xhs8b" Jul 6 23:56:36.108301 kubelet[3292]: I0706 23:56:36.107399 
3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-whisker-backend-key-pair\") pod \"whisker-76948fb6d9-8kpvd\" (UID: \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\") " pod="calico-system/whisker-76948fb6d9-8kpvd" Jul 6 23:56:36.108432 kubelet[3292]: I0706 23:56:36.107424 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b182eb4-112d-494c-ad49-a4d43ae37b16-tigera-ca-bundle\") pod \"calico-kube-controllers-79f7f6c588-x5rcf\" (UID: \"0b182eb4-112d-494c-ad49-a4d43ae37b16\") " pod="calico-system/calico-kube-controllers-79f7f6c588-x5rcf" Jul 6 23:56:36.108432 kubelet[3292]: I0706 23:56:36.107443 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b426be-d8f6-4a60-8c2e-1c346fd9da79-config\") pod \"goldmane-58fd7646b9-xhs8b\" (UID: \"84b426be-d8f6-4a60-8c2e-1c346fd9da79\") " pod="calico-system/goldmane-58fd7646b9-xhs8b" Jul 6 23:56:36.108432 kubelet[3292]: I0706 23:56:36.107477 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58666ff6-c819-4067-ae41-b5a4a7ab70fc-config-volume\") pod \"coredns-7c65d6cfc9-nh8m8\" (UID: \"58666ff6-c819-4067-ae41-b5a4a7ab70fc\") " pod="kube-system/coredns-7c65d6cfc9-nh8m8" Jul 6 23:56:36.108432 kubelet[3292]: I0706 23:56:36.107499 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-whisker-ca-bundle\") pod \"whisker-76948fb6d9-8kpvd\" (UID: \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\") " pod="calico-system/whisker-76948fb6d9-8kpvd" Jul 6 
23:56:36.108432 kubelet[3292]: I0706 23:56:36.107537 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjflp\" (UniqueName: \"kubernetes.io/projected/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-kube-api-access-sjflp\") pod \"whisker-76948fb6d9-8kpvd\" (UID: \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\") " pod="calico-system/whisker-76948fb6d9-8kpvd" Jul 6 23:56:36.108556 kubelet[3292]: I0706 23:56:36.107564 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f805d877-66eb-46da-b324-d84c54cb40ca-calico-apiserver-certs\") pod \"calico-apiserver-68646bbcb-rvxtr\" (UID: \"f805d877-66eb-46da-b324-d84c54cb40ca\") " pod="calico-apiserver/calico-apiserver-68646bbcb-rvxtr" Jul 6 23:56:36.108556 kubelet[3292]: I0706 23:56:36.107590 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbt5d\" (UniqueName: \"kubernetes.io/projected/58666ff6-c819-4067-ae41-b5a4a7ab70fc-kube-api-access-kbt5d\") pod \"coredns-7c65d6cfc9-nh8m8\" (UID: \"58666ff6-c819-4067-ae41-b5a4a7ab70fc\") " pod="kube-system/coredns-7c65d6cfc9-nh8m8" Jul 6 23:56:37.072575 containerd[1804]: time="2025-07-06T23:56:37.071036499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2vkj,Uid:d05fe5f5-a0d0-4818-841f-97f17bafd42f,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:37.120879 containerd[1804]: time="2025-07-06T23:56:37.120811468Z" level=info msg="shim disconnected" id=befc2c2b3094b57f42bd4e2d1b8afb554487871e3a56bde68ef8abbaefda65b5 namespace=k8s.io Jul 6 23:56:37.121221 containerd[1804]: time="2025-07-06T23:56:37.121194170Z" level=warning msg="cleaning up after shim disconnected" id=befc2c2b3094b57f42bd4e2d1b8afb554487871e3a56bde68ef8abbaefda65b5 namespace=k8s.io Jul 6 23:56:37.121382 containerd[1804]: time="2025-07-06T23:56:37.121340871Z" level=info 
msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:37.188693 containerd[1804]: time="2025-07-06T23:56:37.188640634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pjxzj,Uid:213a78c2-eb8b-4530-9913-02f60715b4f4,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:37.202633 containerd[1804]: time="2025-07-06T23:56:37.202480909Z" level=error msg="Failed to destroy network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.203340 containerd[1804]: time="2025-07-06T23:56:37.203096912Z" level=error msg="encountered an error cleaning up failed sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.203340 containerd[1804]: time="2025-07-06T23:56:37.203207912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2vkj,Uid:d05fe5f5-a0d0-4818-841f-97f17bafd42f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.203768 containerd[1804]: time="2025-07-06T23:56:37.203738215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68646bbcb-rvxtr,Uid:f805d877-66eb-46da-b324-d84c54cb40ca,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:56:37.204151 containerd[1804]: time="2025-07-06T23:56:37.203964617Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-xhs8b,Uid:84b426be-d8f6-4a60-8c2e-1c346fd9da79,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:37.204375 kubelet[3292]: E0706 23:56:37.204327 3292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.205023 kubelet[3292]: E0706 23:56:37.204429 3292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q2vkj" Jul 6 23:56:37.205023 kubelet[3292]: E0706 23:56:37.204479 3292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q2vkj" Jul 6 23:56:37.205023 kubelet[3292]: E0706 23:56:37.204546 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q2vkj_calico-system(d05fe5f5-a0d0-4818-841f-97f17bafd42f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q2vkj_calico-system(d05fe5f5-a0d0-4818-841f-97f17bafd42f)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q2vkj" podUID="d05fe5f5-a0d0-4818-841f-97f17bafd42f" Jul 6 23:56:37.205241 containerd[1804]: time="2025-07-06T23:56:37.204215518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f7f6c588-x5rcf,Uid:0b182eb4-112d-494c-ad49-a4d43ae37b16,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:37.205241 containerd[1804]: time="2025-07-06T23:56:37.204548820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nh8m8,Uid:58666ff6-c819-4067-ae41-b5a4a7ab70fc,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:37.207012 containerd[1804]: time="2025-07-06T23:56:37.206278829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76948fb6d9-8kpvd,Uid:30e4ddaa-52d3-4520-b020-bfadbf9c8b21,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:37.207012 containerd[1804]: time="2025-07-06T23:56:37.206952033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68646bbcb-2gm8s,Uid:716cfedd-0158-4dcb-9ac1-1fdba73e9c13,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:56:37.327836 containerd[1804]: time="2025-07-06T23:56:37.327702684Z" level=error msg="Failed to destroy network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.328821 containerd[1804]: time="2025-07-06T23:56:37.328511689Z" level=error msg="encountered an error cleaning up failed sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.328821 containerd[1804]: time="2025-07-06T23:56:37.328641489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pjxzj,Uid:213a78c2-eb8b-4530-9913-02f60715b4f4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.329038 kubelet[3292]: E0706 23:56:37.328964 3292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.329262 kubelet[3292]: E0706 23:56:37.329043 3292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pjxzj" Jul 6 23:56:37.329262 kubelet[3292]: E0706 23:56:37.329070 3292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pjxzj" Jul 6 23:56:37.329706 kubelet[3292]: E0706 23:56:37.329434 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-pjxzj_kube-system(213a78c2-eb8b-4530-9913-02f60715b4f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pjxzj_kube-system(213a78c2-eb8b-4530-9913-02f60715b4f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pjxzj" podUID="213a78c2-eb8b-4530-9913-02f60715b4f4" Jul 6 23:56:37.526302 containerd[1804]: time="2025-07-06T23:56:37.526237656Z" level=error msg="Failed to destroy network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.528199 containerd[1804]: time="2025-07-06T23:56:37.527244561Z" level=error msg="encountered an error cleaning up failed sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.528199 containerd[1804]: time="2025-07-06T23:56:37.527325662Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-68646bbcb-rvxtr,Uid:f805d877-66eb-46da-b324-d84c54cb40ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.528419 kubelet[3292]: E0706 23:56:37.527578 3292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.528419 kubelet[3292]: E0706 23:56:37.527648 3292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68646bbcb-rvxtr" Jul 6 23:56:37.528419 kubelet[3292]: E0706 23:56:37.527681 3292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68646bbcb-rvxtr" Jul 6 23:56:37.528572 kubelet[3292]: E0706 23:56:37.527741 3292 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68646bbcb-rvxtr_calico-apiserver(f805d877-66eb-46da-b324-d84c54cb40ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68646bbcb-rvxtr_calico-apiserver(f805d877-66eb-46da-b324-d84c54cb40ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68646bbcb-rvxtr" podUID="f805d877-66eb-46da-b324-d84c54cb40ca" Jul 6 23:56:37.541145 containerd[1804]: time="2025-07-06T23:56:37.540204331Z" level=error msg="Failed to destroy network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.541722 containerd[1804]: time="2025-07-06T23:56:37.541677839Z" level=error msg="encountered an error cleaning up failed sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.541827 containerd[1804]: time="2025-07-06T23:56:37.541750140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-xhs8b,Uid:84b426be-d8f6-4a60-8c2e-1c346fd9da79,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.542009 kubelet[3292]: E0706 23:56:37.541966 3292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.542091 kubelet[3292]: E0706 23:56:37.542046 3292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-xhs8b" Jul 6 23:56:37.542091 kubelet[3292]: E0706 23:56:37.542073 3292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-xhs8b" Jul 6 23:56:37.542236 kubelet[3292]: E0706 23:56:37.542166 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-xhs8b_calico-system(84b426be-d8f6-4a60-8c2e-1c346fd9da79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-xhs8b_calico-system(84b426be-d8f6-4a60-8c2e-1c346fd9da79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-xhs8b" podUID="84b426be-d8f6-4a60-8c2e-1c346fd9da79" Jul 6 23:56:37.582433 containerd[1804]: time="2025-07-06T23:56:37.582286858Z" level=error msg="Failed to destroy network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.583533 containerd[1804]: time="2025-07-06T23:56:37.583481165Z" level=error msg="encountered an error cleaning up failed sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.583718 containerd[1804]: time="2025-07-06T23:56:37.583694166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nh8m8,Uid:58666ff6-c819-4067-ae41-b5a4a7ab70fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.584104 kubelet[3292]: E0706 23:56:37.584055 3292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.584233 kubelet[3292]: E0706 23:56:37.584150 3292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nh8m8" Jul 6 23:56:37.584233 kubelet[3292]: E0706 23:56:37.584180 3292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nh8m8" Jul 6 23:56:37.584315 kubelet[3292]: E0706 23:56:37.584238 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nh8m8_kube-system(58666ff6-c819-4067-ae41-b5a4a7ab70fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nh8m8_kube-system(58666ff6-c819-4067-ae41-b5a4a7ab70fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nh8m8" podUID="58666ff6-c819-4067-ae41-b5a4a7ab70fc" Jul 6 23:56:37.586602 containerd[1804]: 
time="2025-07-06T23:56:37.586381580Z" level=error msg="Failed to destroy network for sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.587085 containerd[1804]: time="2025-07-06T23:56:37.586876783Z" level=error msg="encountered an error cleaning up failed sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.587231 containerd[1804]: time="2025-07-06T23:56:37.587058384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f7f6c588-x5rcf,Uid:0b182eb4-112d-494c-ad49-a4d43ae37b16,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.587488 kubelet[3292]: E0706 23:56:37.587455 3292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.587577 kubelet[3292]: E0706 23:56:37.587517 3292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79f7f6c588-x5rcf" Jul 6 23:56:37.587577 kubelet[3292]: E0706 23:56:37.587545 3292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79f7f6c588-x5rcf" Jul 6 23:56:37.587675 kubelet[3292]: E0706 23:56:37.587597 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79f7f6c588-x5rcf_calico-system(0b182eb4-112d-494c-ad49-a4d43ae37b16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79f7f6c588-x5rcf_calico-system(0b182eb4-112d-494c-ad49-a4d43ae37b16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79f7f6c588-x5rcf" podUID="0b182eb4-112d-494c-ad49-a4d43ae37b16" Jul 6 23:56:37.589567 containerd[1804]: time="2025-07-06T23:56:37.589369897Z" level=error msg="Failed to destroy network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.589892 containerd[1804]: time="2025-07-06T23:56:37.589830499Z" level=error msg="encountered an error cleaning up failed sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.590037 containerd[1804]: time="2025-07-06T23:56:37.589986800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68646bbcb-2gm8s,Uid:716cfedd-0158-4dcb-9ac1-1fdba73e9c13,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.590616 kubelet[3292]: E0706 23:56:37.590362 3292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.590616 kubelet[3292]: E0706 23:56:37.590476 3292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-68646bbcb-2gm8s" Jul 6 23:56:37.590616 kubelet[3292]: E0706 23:56:37.590504 3292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68646bbcb-2gm8s" Jul 6 23:56:37.591569 kubelet[3292]: E0706 23:56:37.590560 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68646bbcb-2gm8s_calico-apiserver(716cfedd-0158-4dcb-9ac1-1fdba73e9c13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68646bbcb-2gm8s_calico-apiserver(716cfedd-0158-4dcb-9ac1-1fdba73e9c13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68646bbcb-2gm8s" podUID="716cfedd-0158-4dcb-9ac1-1fdba73e9c13" Jul 6 23:56:37.595067 containerd[1804]: time="2025-07-06T23:56:37.595023227Z" level=error msg="Failed to destroy network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.595561 containerd[1804]: time="2025-07-06T23:56:37.595323629Z" level=error msg="encountered an error cleaning up failed sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.595561 containerd[1804]: time="2025-07-06T23:56:37.595473330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76948fb6d9-8kpvd,Uid:30e4ddaa-52d3-4520-b020-bfadbf9c8b21,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.595789 kubelet[3292]: E0706 23:56:37.595655 3292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:37.595789 kubelet[3292]: E0706 23:56:37.595702 3292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76948fb6d9-8kpvd" Jul 6 23:56:37.595789 kubelet[3292]: E0706 23:56:37.595728 3292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76948fb6d9-8kpvd" Jul 6 23:56:37.595979 kubelet[3292]: E0706 23:56:37.595771 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76948fb6d9-8kpvd_calico-system(30e4ddaa-52d3-4520-b020-bfadbf9c8b21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-76948fb6d9-8kpvd_calico-system(30e4ddaa-52d3-4520-b020-bfadbf9c8b21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76948fb6d9-8kpvd" podUID="30e4ddaa-52d3-4520-b020-bfadbf9c8b21" Jul 6 23:56:38.020747 kubelet[3292]: I0706 23:56:38.020592 3292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:56:38.024347 containerd[1804]: time="2025-07-06T23:56:38.021578329Z" level=info msg="StopPodSandbox for \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\"" Jul 6 23:56:38.024347 containerd[1804]: time="2025-07-06T23:56:38.021809230Z" level=info msg="Ensure that sandbox a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538 in task-service has been cleanup successfully" Jul 6 23:56:38.025784 kubelet[3292]: I0706 23:56:38.025751 3292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:56:38.027391 containerd[1804]: time="2025-07-06T23:56:38.026357955Z" level=info msg="StopPodSandbox for 
\"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\"" Jul 6 23:56:38.027391 containerd[1804]: time="2025-07-06T23:56:38.026552556Z" level=info msg="Ensure that sandbox 80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e in task-service has been cleanup successfully" Jul 6 23:56:38.030991 kubelet[3292]: I0706 23:56:38.030613 3292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:56:38.032905 containerd[1804]: time="2025-07-06T23:56:38.032881290Z" level=info msg="StopPodSandbox for \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\"" Jul 6 23:56:38.034154 containerd[1804]: time="2025-07-06T23:56:38.034005996Z" level=info msg="Ensure that sandbox 826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190 in task-service has been cleanup successfully" Jul 6 23:56:38.042839 containerd[1804]: time="2025-07-06T23:56:38.041509337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:56:38.043974 kubelet[3292]: I0706 23:56:38.043952 3292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:56:38.044866 containerd[1804]: time="2025-07-06T23:56:38.044841155Z" level=info msg="StopPodSandbox for \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\"" Jul 6 23:56:38.046154 containerd[1804]: time="2025-07-06T23:56:38.046123462Z" level=info msg="Ensure that sandbox 167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635 in task-service has been cleanup successfully" Jul 6 23:56:38.054185 kubelet[3292]: I0706 23:56:38.054161 3292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:56:38.057863 containerd[1804]: time="2025-07-06T23:56:38.057501823Z" level=info 
msg="StopPodSandbox for \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\"" Jul 6 23:56:38.058508 containerd[1804]: time="2025-07-06T23:56:38.058482328Z" level=info msg="Ensure that sandbox 9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3 in task-service has been cleanup successfully" Jul 6 23:56:38.075752 kubelet[3292]: I0706 23:56:38.074636 3292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:56:38.082173 containerd[1804]: time="2025-07-06T23:56:38.081331452Z" level=info msg="StopPodSandbox for \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\"" Jul 6 23:56:38.091241 containerd[1804]: time="2025-07-06T23:56:38.089829998Z" level=info msg="Ensure that sandbox 69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054 in task-service has been cleanup successfully" Jul 6 23:56:38.124262 kubelet[3292]: I0706 23:56:38.124225 3292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:56:38.129027 containerd[1804]: time="2025-07-06T23:56:38.128896208Z" level=info msg="StopPodSandbox for \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\"" Jul 6 23:56:38.129180 containerd[1804]: time="2025-07-06T23:56:38.129111210Z" level=info msg="Ensure that sandbox 33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe in task-service has been cleanup successfully" Jul 6 23:56:38.136370 kubelet[3292]: I0706 23:56:38.135077 3292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:56:38.137476 containerd[1804]: time="2025-07-06T23:56:38.137357554Z" level=info msg="StopPodSandbox for \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\"" Jul 6 23:56:38.137812 
containerd[1804]: time="2025-07-06T23:56:38.137786656Z" level=info msg="Ensure that sandbox 8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0 in task-service has been cleanup successfully" Jul 6 23:56:38.183231 containerd[1804]: time="2025-07-06T23:56:38.183173801Z" level=error msg="StopPodSandbox for \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\" failed" error="failed to destroy network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:38.183729 kubelet[3292]: E0706 23:56:38.183682 3292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:56:38.183986 kubelet[3292]: E0706 23:56:38.183909 3292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e"} Jul 6 23:56:38.184236 kubelet[3292]: E0706 23:56:38.184145 3292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84b426be-d8f6-4a60-8c2e-1c346fd9da79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 6 23:56:38.184236 kubelet[3292]: E0706 23:56:38.184186 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84b426be-d8f6-4a60-8c2e-1c346fd9da79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-xhs8b" podUID="84b426be-d8f6-4a60-8c2e-1c346fd9da79" Jul 6 23:56:38.210736 containerd[1804]: time="2025-07-06T23:56:38.210675550Z" level=error msg="StopPodSandbox for \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\" failed" error="failed to destroy network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:38.211290 kubelet[3292]: E0706 23:56:38.210985 3292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:56:38.211290 kubelet[3292]: E0706 23:56:38.211050 3292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538"} Jul 6 23:56:38.211290 kubelet[3292]: E0706 23:56:38.211105 3292 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d05fe5f5-a0d0-4818-841f-97f17bafd42f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:38.211290 kubelet[3292]: E0706 23:56:38.211167 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d05fe5f5-a0d0-4818-841f-97f17bafd42f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q2vkj" podUID="d05fe5f5-a0d0-4818-841f-97f17bafd42f" Jul 6 23:56:38.225156 containerd[1804]: time="2025-07-06T23:56:38.223805521Z" level=error msg="StopPodSandbox for \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\" failed" error="failed to destroy network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:38.230386 kubelet[3292]: E0706 23:56:38.230344 3292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:56:38.230527 kubelet[3292]: E0706 23:56:38.230398 3292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635"} Jul 6 23:56:38.230527 kubelet[3292]: E0706 23:56:38.230443 3292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"716cfedd-0158-4dcb-9ac1-1fdba73e9c13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:38.230527 kubelet[3292]: E0706 23:56:38.230474 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"716cfedd-0158-4dcb-9ac1-1fdba73e9c13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68646bbcb-2gm8s" podUID="716cfedd-0158-4dcb-9ac1-1fdba73e9c13" Jul 6 23:56:38.246335 containerd[1804]: time="2025-07-06T23:56:38.246041541Z" level=error msg="StopPodSandbox for \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\" failed" error="failed to destroy network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 6 23:56:38.246471 kubelet[3292]: E0706 23:56:38.246426 3292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:56:38.246610 kubelet[3292]: E0706 23:56:38.246475 3292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190"} Jul 6 23:56:38.246610 kubelet[3292]: E0706 23:56:38.246515 3292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f805d877-66eb-46da-b324-d84c54cb40ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:38.246610 kubelet[3292]: E0706 23:56:38.246579 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f805d877-66eb-46da-b324-d84c54cb40ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68646bbcb-rvxtr" 
podUID="f805d877-66eb-46da-b324-d84c54cb40ca" Jul 6 23:56:38.250409 containerd[1804]: time="2025-07-06T23:56:38.250360964Z" level=error msg="StopPodSandbox for \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\" failed" error="failed to destroy network for sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:38.250654 kubelet[3292]: E0706 23:56:38.250602 3292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:56:38.250756 kubelet[3292]: E0706 23:56:38.250670 3292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054"} Jul 6 23:56:38.250756 kubelet[3292]: E0706 23:56:38.250709 3292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b182eb4-112d-494c-ad49-a4d43ae37b16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:38.250756 kubelet[3292]: E0706 23:56:38.250743 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"0b182eb4-112d-494c-ad49-a4d43ae37b16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79f7f6c588-x5rcf" podUID="0b182eb4-112d-494c-ad49-a4d43ae37b16" Jul 6 23:56:38.257508 containerd[1804]: time="2025-07-06T23:56:38.257454902Z" level=error msg="StopPodSandbox for \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\" failed" error="failed to destroy network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:38.257852 kubelet[3292]: E0706 23:56:38.257788 3292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:56:38.257852 kubelet[3292]: E0706 23:56:38.257831 3292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3"} Jul 6 23:56:38.258088 kubelet[3292]: E0706 23:56:38.257865 3292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:38.258088 kubelet[3292]: E0706 23:56:38.257899 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76948fb6d9-8kpvd" podUID="30e4ddaa-52d3-4520-b020-bfadbf9c8b21" Jul 6 23:56:38.258655 containerd[1804]: time="2025-07-06T23:56:38.258617109Z" level=error msg="StopPodSandbox for \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\" failed" error="failed to destroy network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:38.259012 kubelet[3292]: E0706 23:56:38.258874 3292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:56:38.259012 
kubelet[3292]: E0706 23:56:38.258914 3292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe"} Jul 6 23:56:38.259012 kubelet[3292]: E0706 23:56:38.258954 3292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58666ff6-c819-4067-ae41-b5a4a7ab70fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:38.259012 kubelet[3292]: E0706 23:56:38.258984 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58666ff6-c819-4067-ae41-b5a4a7ab70fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nh8m8" podUID="58666ff6-c819-4067-ae41-b5a4a7ab70fc" Jul 6 23:56:38.260643 containerd[1804]: time="2025-07-06T23:56:38.260615219Z" level=error msg="StopPodSandbox for \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\" failed" error="failed to destroy network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:38.260829 kubelet[3292]: E0706 23:56:38.260790 3292 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:56:38.260909 kubelet[3292]: E0706 23:56:38.260837 3292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0"} Jul 6 23:56:38.260909 kubelet[3292]: E0706 23:56:38.260874 3292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"213a78c2-eb8b-4530-9913-02f60715b4f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:38.260997 kubelet[3292]: E0706 23:56:38.260900 3292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"213a78c2-eb8b-4530-9913-02f60715b4f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pjxzj" podUID="213a78c2-eb8b-4530-9913-02f60715b4f4" Jul 6 23:56:46.313458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4023371423.mount: Deactivated successfully. 
Jul 6 23:56:46.361929 containerd[1804]: time="2025-07-06T23:56:46.361867510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:46.364026 containerd[1804]: time="2025-07-06T23:56:46.363954820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:56:46.367268 containerd[1804]: time="2025-07-06T23:56:46.367216737Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:46.373254 containerd[1804]: time="2025-07-06T23:56:46.373195468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:46.374306 containerd[1804]: time="2025-07-06T23:56:46.373824071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.332075633s" Jul 6 23:56:46.374306 containerd[1804]: time="2025-07-06T23:56:46.373864371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:56:46.392949 containerd[1804]: time="2025-07-06T23:56:46.392903970Z" level=info msg="CreateContainer within sandbox \"a397854aae93e2873a480d698676457189a6053bd1fb3024eb958b3c8f68563b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:56:46.437846 containerd[1804]: time="2025-07-06T23:56:46.437795901Z" level=info msg="CreateContainer 
within sandbox \"a397854aae93e2873a480d698676457189a6053bd1fb3024eb958b3c8f68563b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c3e500f2efeaca2826accf41cffd1e492402943f3183057d247203250357201b\"" Jul 6 23:56:46.438485 containerd[1804]: time="2025-07-06T23:56:46.438432104Z" level=info msg="StartContainer for \"c3e500f2efeaca2826accf41cffd1e492402943f3183057d247203250357201b\"" Jul 6 23:56:46.497098 containerd[1804]: time="2025-07-06T23:56:46.496992906Z" level=info msg="StartContainer for \"c3e500f2efeaca2826accf41cffd1e492402943f3183057d247203250357201b\" returns successfully" Jul 6 23:56:46.839441 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:56:46.839620 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 6 23:56:46.957490 containerd[1804]: time="2025-07-06T23:56:46.957432278Z" level=info msg="StopPodSandbox for \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\"" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.044 [INFO][4498] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.045 [INFO][4498] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" iface="eth0" netns="/var/run/netns/cni-c5b7f08c-b629-71f6-9a61-a99aea1ef46e" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.045 [INFO][4498] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" iface="eth0" netns="/var/run/netns/cni-c5b7f08c-b629-71f6-9a61-a99aea1ef46e" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.045 [INFO][4498] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" iface="eth0" netns="/var/run/netns/cni-c5b7f08c-b629-71f6-9a61-a99aea1ef46e" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.045 [INFO][4498] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.045 [INFO][4498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.104 [INFO][4506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" HandleID="k8s-pod-network.9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.104 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.105 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.113 [WARNING][4506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" HandleID="k8s-pod-network.9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.113 [INFO][4506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" HandleID="k8s-pod-network.9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.115 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:47.121365 containerd[1804]: 2025-07-06 23:56:47.119 [INFO][4498] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:56:47.122104 containerd[1804]: time="2025-07-06T23:56:47.121969925Z" level=info msg="TearDown network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\" successfully" Jul 6 23:56:47.122104 containerd[1804]: time="2025-07-06T23:56:47.122022426Z" level=info msg="StopPodSandbox for \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\" returns successfully" Jul 6 23:56:47.189868 kubelet[3292]: I0706 23:56:47.189157 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mtjlq" podStartSLOduration=1.669723899 podStartE2EDuration="25.189112171s" podCreationTimestamp="2025-07-06 23:56:22 +0000 UTC" firstStartedPulling="2025-07-06 23:56:22.855334904 +0000 UTC m=+19.611666745" lastFinishedPulling="2025-07-06 23:56:46.374723276 +0000 UTC m=+43.131055017" observedRunningTime="2025-07-06 23:56:47.187710964 +0000 UTC m=+43.944042705" watchObservedRunningTime="2025-07-06 23:56:47.189112171 +0000 UTC m=+43.945444012" 
Jul 6 23:56:47.287109 kubelet[3292]: I0706 23:56:47.285545 3292 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-whisker-ca-bundle\") pod \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\" (UID: \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\") " Jul 6 23:56:47.287109 kubelet[3292]: I0706 23:56:47.285606 3292 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-whisker-backend-key-pair\") pod \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\" (UID: \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\") " Jul 6 23:56:47.287109 kubelet[3292]: I0706 23:56:47.285646 3292 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjflp\" (UniqueName: \"kubernetes.io/projected/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-kube-api-access-sjflp\") pod \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\" (UID: \"30e4ddaa-52d3-4520-b020-bfadbf9c8b21\") " Jul 6 23:56:47.287109 kubelet[3292]: I0706 23:56:47.286024 3292 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "30e4ddaa-52d3-4520-b020-bfadbf9c8b21" (UID: "30e4ddaa-52d3-4520-b020-bfadbf9c8b21"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:56:47.290370 kubelet[3292]: I0706 23:56:47.290329 3292 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "30e4ddaa-52d3-4520-b020-bfadbf9c8b21" (UID: "30e4ddaa-52d3-4520-b020-bfadbf9c8b21"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:56:47.290513 kubelet[3292]: I0706 23:56:47.290372 3292 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-kube-api-access-sjflp" (OuterVolumeSpecName: "kube-api-access-sjflp") pod "30e4ddaa-52d3-4520-b020-bfadbf9c8b21" (UID: "30e4ddaa-52d3-4520-b020-bfadbf9c8b21"). InnerVolumeSpecName "kube-api-access-sjflp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:56:47.314955 systemd[1]: run-netns-cni\x2dc5b7f08c\x2db629\x2d71f6\x2d9a61\x2da99aea1ef46e.mount: Deactivated successfully. Jul 6 23:56:47.315165 systemd[1]: var-lib-kubelet-pods-30e4ddaa\x2d52d3\x2d4520\x2db020\x2dbfadbf9c8b21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjflp.mount: Deactivated successfully. Jul 6 23:56:47.315318 systemd[1]: var-lib-kubelet-pods-30e4ddaa\x2d52d3\x2d4520\x2db020\x2dbfadbf9c8b21-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 6 23:56:47.386444 kubelet[3292]: I0706 23:56:47.386282 3292 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-whisker-ca-bundle\") on node \"ci-4081.3.4-a-fe0535f741\" DevicePath \"\"" Jul 6 23:56:47.386444 kubelet[3292]: I0706 23:56:47.386330 3292 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-whisker-backend-key-pair\") on node \"ci-4081.3.4-a-fe0535f741\" DevicePath \"\"" Jul 6 23:56:47.386444 kubelet[3292]: I0706 23:56:47.386348 3292 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjflp\" (UniqueName: \"kubernetes.io/projected/30e4ddaa-52d3-4520-b020-bfadbf9c8b21-kube-api-access-sjflp\") on node \"ci-4081.3.4-a-fe0535f741\" DevicePath \"\"" Jul 6 23:56:47.687888 kubelet[3292]: I0706 23:56:47.687719 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71049605-d73c-4340-9e15-af75e1878481-whisker-ca-bundle\") pod \"whisker-698d94675f-jg87x\" (UID: \"71049605-d73c-4340-9e15-af75e1878481\") " pod="calico-system/whisker-698d94675f-jg87x" Jul 6 23:56:47.687888 kubelet[3292]: I0706 23:56:47.687793 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71049605-d73c-4340-9e15-af75e1878481-whisker-backend-key-pair\") pod \"whisker-698d94675f-jg87x\" (UID: \"71049605-d73c-4340-9e15-af75e1878481\") " pod="calico-system/whisker-698d94675f-jg87x" Jul 6 23:56:47.687888 kubelet[3292]: I0706 23:56:47.687824 3292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqc62\" (UniqueName: 
\"kubernetes.io/projected/71049605-d73c-4340-9e15-af75e1878481-kube-api-access-gqc62\") pod \"whisker-698d94675f-jg87x\" (UID: \"71049605-d73c-4340-9e15-af75e1878481\") " pod="calico-system/whisker-698d94675f-jg87x" Jul 6 23:56:47.724741 kubelet[3292]: I0706 23:56:47.724097 3292 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:56:47.856862 containerd[1804]: time="2025-07-06T23:56:47.856812292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698d94675f-jg87x,Uid:71049605-d73c-4340-9e15-af75e1878481,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:47.891482 kubelet[3292]: I0706 23:56:47.891441 3292 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30e4ddaa-52d3-4520-b020-bfadbf9c8b21" path="/var/lib/kubelet/pods/30e4ddaa-52d3-4520-b020-bfadbf9c8b21/volumes" Jul 6 23:56:48.026826 systemd-networkd[1364]: calif7874d3fd72: Link UP Jul 6 23:56:48.029339 systemd-networkd[1364]: calif7874d3fd72: Gained carrier Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.937 [INFO][4530] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.947 [INFO][4530] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0 whisker-698d94675f- calico-system 71049605-d73c-4340-9e15-af75e1878481 885 0 2025-07-06 23:56:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:698d94675f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-a-fe0535f741 whisker-698d94675f-jg87x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif7874d3fd72 [] [] }} ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Namespace="calico-system" Pod="whisker-698d94675f-jg87x" 
WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.947 [INFO][4530] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Namespace="calico-system" Pod="whisker-698d94675f-jg87x" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.971 [INFO][4541] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" HandleID="k8s-pod-network.a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.971 [INFO][4541] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" HandleID="k8s-pod-network.a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-fe0535f741", "pod":"whisker-698d94675f-jg87x", "timestamp":"2025-07-06 23:56:47.971814117 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fe0535f741", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.972 [INFO][4541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.972 [INFO][4541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.972 [INFO][4541] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fe0535f741' Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.979 [INFO][4541] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.982 [INFO][4541] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.986 [INFO][4541] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.987 [INFO][4541] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.989 [INFO][4541] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.989 [INFO][4541] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.990 [INFO][4541] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410 Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:47.997 [INFO][4541] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:48.003 [INFO][4541] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.50.193/26] block=192.168.50.192/26 handle="k8s-pod-network.a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:48.003 [INFO][4541] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.193/26] handle="k8s-pod-network.a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:48.003 [INFO][4541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:48.048821 containerd[1804]: 2025-07-06 23:56:48.003 [INFO][4541] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.193/26] IPv6=[] ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" HandleID="k8s-pod-network.a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" Jul 6 23:56:48.050043 containerd[1804]: 2025-07-06 23:56:48.005 [INFO][4530] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Namespace="calico-system" Pod="whisker-698d94675f-jg87x" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0", GenerateName:"whisker-698d94675f-", Namespace:"calico-system", SelfLink:"", UID:"71049605-d73c-4340-9e15-af75e1878481", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"698d94675f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"", Pod:"whisker-698d94675f-jg87x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif7874d3fd72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:48.050043 containerd[1804]: 2025-07-06 23:56:48.005 [INFO][4530] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.193/32] ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Namespace="calico-system" Pod="whisker-698d94675f-jg87x" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" Jul 6 23:56:48.050043 containerd[1804]: 2025-07-06 23:56:48.005 [INFO][4530] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7874d3fd72 ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Namespace="calico-system" Pod="whisker-698d94675f-jg87x" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" Jul 6 23:56:48.050043 containerd[1804]: 2025-07-06 23:56:48.028 [INFO][4530] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Namespace="calico-system" Pod="whisker-698d94675f-jg87x" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" Jul 6 23:56:48.050043 containerd[1804]: 2025-07-06 23:56:48.028 [INFO][4530] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" Namespace="calico-system" Pod="whisker-698d94675f-jg87x" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0", GenerateName:"whisker-698d94675f-", Namespace:"calico-system", SelfLink:"", UID:"71049605-d73c-4340-9e15-af75e1878481", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"698d94675f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410", Pod:"whisker-698d94675f-jg87x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif7874d3fd72", MAC:"fa:eb:9a:f2:50:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:48.050043 containerd[1804]: 2025-07-06 23:56:48.046 [INFO][4530] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410" 
Namespace="calico-system" Pod="whisker-698d94675f-jg87x" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--698d94675f--jg87x-eth0" Jul 6 23:56:48.070106 containerd[1804]: time="2025-07-06T23:56:48.070012465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:48.070106 containerd[1804]: time="2025-07-06T23:56:48.070058565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:48.070106 containerd[1804]: time="2025-07-06T23:56:48.070072066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:48.070531 containerd[1804]: time="2025-07-06T23:56:48.070434067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:48.120805 containerd[1804]: time="2025-07-06T23:56:48.120760397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698d94675f-jg87x,Uid:71049605-d73c-4340-9e15-af75e1878481,Namespace:calico-system,Attempt:0,} returns sandbox id \"a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410\"" Jul 6 23:56:48.122441 containerd[1804]: time="2025-07-06T23:56:48.122410005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:56:48.665256 kernel: bpftool[4714]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:56:48.890144 containerd[1804]: time="2025-07-06T23:56:48.888532503Z" level=info msg="StopPodSandbox for \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\"" Jul 6 23:56:49.018195 systemd-networkd[1364]: vxlan.calico: Link UP Jul 6 23:56:49.018206 systemd-networkd[1364]: vxlan.calico: Gained carrier Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:48.972 [INFO][4737] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:48.972 [INFO][4737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" iface="eth0" netns="/var/run/netns/cni-238a99d1-5d3a-1437-8501-b5d3ec88d299" Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:48.972 [INFO][4737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" iface="eth0" netns="/var/run/netns/cni-238a99d1-5d3a-1437-8501-b5d3ec88d299" Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:48.972 [INFO][4737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" iface="eth0" netns="/var/run/netns/cni-238a99d1-5d3a-1437-8501-b5d3ec88d299" Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:48.973 [INFO][4737] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:48.973 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:49.006 [INFO][4747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" HandleID="k8s-pod-network.826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:49.006 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:49.006 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:49.025 [WARNING][4747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" HandleID="k8s-pod-network.826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:49.026 [INFO][4747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" HandleID="k8s-pod-network.826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:49.030 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:49.057972 containerd[1804]: 2025-07-06 23:56:49.051 [INFO][4737] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:56:49.057972 containerd[1804]: time="2025-07-06T23:56:49.056510370Z" level=info msg="TearDown network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\" successfully" Jul 6 23:56:49.057972 containerd[1804]: time="2025-07-06T23:56:49.056546270Z" level=info msg="StopPodSandbox for \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\" returns successfully" Jul 6 23:56:49.063401 containerd[1804]: time="2025-07-06T23:56:49.060020086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68646bbcb-rvxtr,Uid:f805d877-66eb-46da-b324-d84c54cb40ca,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:56:49.066036 systemd[1]: run-netns-cni\x2d238a99d1\x2d5d3a\x2d1437\x2d8501\x2db5d3ec88d299.mount: Deactivated successfully. Jul 6 23:56:49.289327 systemd-networkd[1364]: calif7874d3fd72: Gained IPv6LL Jul 6 23:56:49.294613 systemd-networkd[1364]: cali2ff430243d8: Link UP Jul 6 23:56:49.295619 systemd-networkd[1364]: cali2ff430243d8: Gained carrier Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.212 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0 calico-apiserver-68646bbcb- calico-apiserver f805d877-66eb-46da-b324-d84c54cb40ca 900 0 2025-07-06 23:56:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68646bbcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-fe0535f741 calico-apiserver-68646bbcb-rvxtr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ff430243d8 [] [] }} ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" 
Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-rvxtr" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.212 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-rvxtr" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.247 [INFO][4786] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" HandleID="k8s-pod-network.e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.248 [INFO][4786] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" HandleID="k8s-pod-network.e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-fe0535f741", "pod":"calico-apiserver-68646bbcb-rvxtr", "timestamp":"2025-07-06 23:56:49.247646143 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fe0535f741", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.248 [INFO][4786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.248 [INFO][4786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.248 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fe0535f741' Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.256 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.260 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.264 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.266 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.269 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.269 [INFO][4786] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.271 [INFO][4786] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860 Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.276 [INFO][4786] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" 
host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.285 [INFO][4786] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.194/26] block=192.168.50.192/26 handle="k8s-pod-network.e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.285 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.194/26] handle="k8s-pod-network.e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.286 [INFO][4786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:49.320957 containerd[1804]: 2025-07-06 23:56:49.286 [INFO][4786] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.194/26] IPv6=[] ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" HandleID="k8s-pod-network.e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.324652 containerd[1804]: 2025-07-06 23:56:49.291 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-rvxtr" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0", GenerateName:"calico-apiserver-68646bbcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f805d877-66eb-46da-b324-d84c54cb40ca", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 18, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68646bbcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"", Pod:"calico-apiserver-68646bbcb-rvxtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ff430243d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:49.324652 containerd[1804]: 2025-07-06 23:56:49.291 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.194/32] ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-rvxtr" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.324652 containerd[1804]: 2025-07-06 23:56:49.291 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ff430243d8 ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-rvxtr" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.324652 containerd[1804]: 2025-07-06 23:56:49.296 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-rvxtr" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.324652 containerd[1804]: 2025-07-06 23:56:49.296 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-rvxtr" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0", GenerateName:"calico-apiserver-68646bbcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f805d877-66eb-46da-b324-d84c54cb40ca", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68646bbcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860", Pod:"calico-apiserver-68646bbcb-rvxtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ff430243d8", MAC:"02:6f:f6:4c:05:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:49.324652 containerd[1804]: 2025-07-06 23:56:49.317 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-rvxtr" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:56:49.351336 containerd[1804]: time="2025-07-06T23:56:49.350672513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:49.351336 containerd[1804]: time="2025-07-06T23:56:49.350746314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:49.351336 containerd[1804]: time="2025-07-06T23:56:49.350769714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:49.352088 containerd[1804]: time="2025-07-06T23:56:49.351455917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:49.476881 containerd[1804]: time="2025-07-06T23:56:49.476832089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68646bbcb-rvxtr,Uid:f805d877-66eb-46da-b324-d84c54cb40ca,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860\"" Jul 6 23:56:49.759548 containerd[1804]: time="2025-07-06T23:56:49.758774377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:49.772183 containerd[1804]: time="2025-07-06T23:56:49.772109338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:56:49.776492 containerd[1804]: time="2025-07-06T23:56:49.776457858Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:49.781475 containerd[1804]: time="2025-07-06T23:56:49.781411080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:49.782835 containerd[1804]: time="2025-07-06T23:56:49.782654786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.660107981s" Jul 6 23:56:49.782835 containerd[1804]: time="2025-07-06T23:56:49.782714286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference 
\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:56:49.784875 containerd[1804]: time="2025-07-06T23:56:49.784848396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:56:49.787014 containerd[1804]: time="2025-07-06T23:56:49.786847905Z" level=info msg="CreateContainer within sandbox \"a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:56:49.824287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677544198.mount: Deactivated successfully. Jul 6 23:56:49.826969 containerd[1804]: time="2025-07-06T23:56:49.826394186Z" level=info msg="CreateContainer within sandbox \"a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0710820babcd9a1e2932859b39c51a6b0ac0bf23e26a63afb4181bae1400d051\"" Jul 6 23:56:49.828472 containerd[1804]: time="2025-07-06T23:56:49.828332295Z" level=info msg="StartContainer for \"0710820babcd9a1e2932859b39c51a6b0ac0bf23e26a63afb4181bae1400d051\"" Jul 6 23:56:49.891796 containerd[1804]: time="2025-07-06T23:56:49.891401983Z" level=info msg="StopPodSandbox for \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\"" Jul 6 23:56:49.893968 containerd[1804]: time="2025-07-06T23:56:49.892423287Z" level=info msg="StopPodSandbox for \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\"" Jul 6 23:56:49.894619 containerd[1804]: time="2025-07-06T23:56:49.894235695Z" level=info msg="StopPodSandbox for \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\"" Jul 6 23:56:50.005107 containerd[1804]: time="2025-07-06T23:56:50.004275498Z" level=info msg="StartContainer for \"0710820babcd9a1e2932859b39c51a6b0ac0bf23e26a63afb4181bae1400d051\" returns successfully" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.015 [INFO][4936] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.017 [INFO][4936] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" iface="eth0" netns="/var/run/netns/cni-efad13d6-090a-8b2a-1818-b4e3a2a65568" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.017 [INFO][4936] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" iface="eth0" netns="/var/run/netns/cni-efad13d6-090a-8b2a-1818-b4e3a2a65568" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.018 [INFO][4936] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" iface="eth0" netns="/var/run/netns/cni-efad13d6-090a-8b2a-1818-b4e3a2a65568" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.018 [INFO][4936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.018 [INFO][4936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.068 [INFO][4963] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" HandleID="k8s-pod-network.80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.069 [INFO][4963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.069 [INFO][4963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.077 [WARNING][4963] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" HandleID="k8s-pod-network.80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.077 [INFO][4963] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" HandleID="k8s-pod-network.80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.079 [INFO][4963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:50.088754 containerd[1804]: 2025-07-06 23:56:50.083 [INFO][4936] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:56:50.090799 containerd[1804]: time="2025-07-06T23:56:50.089711188Z" level=info msg="TearDown network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\" successfully" Jul 6 23:56:50.090799 containerd[1804]: time="2025-07-06T23:56:50.089746888Z" level=info msg="StopPodSandbox for \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\" returns successfully" Jul 6 23:56:50.092207 containerd[1804]: time="2025-07-06T23:56:50.091721497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-xhs8b,Uid:84b426be-d8f6-4a60-8c2e-1c346fd9da79,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.016 [INFO][4935] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.019 [INFO][4935] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" iface="eth0" netns="/var/run/netns/cni-796652c1-7d90-6427-bbaa-b2df2c2d7925" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.019 [INFO][4935] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" iface="eth0" netns="/var/run/netns/cni-796652c1-7d90-6427-bbaa-b2df2c2d7925" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.021 [INFO][4935] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" iface="eth0" netns="/var/run/netns/cni-796652c1-7d90-6427-bbaa-b2df2c2d7925" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.021 [INFO][4935] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.021 [INFO][4935] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.084 [INFO][4966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" HandleID="k8s-pod-network.167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.086 [INFO][4966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.086 [INFO][4966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.096 [WARNING][4966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" HandleID="k8s-pod-network.167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.096 [INFO][4966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" HandleID="k8s-pod-network.167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.098 [INFO][4966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:50.103015 containerd[1804]: 2025-07-06 23:56:50.100 [INFO][4935] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:56:50.103015 containerd[1804]: time="2025-07-06T23:56:50.102883248Z" level=info msg="TearDown network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\" successfully" Jul 6 23:56:50.103015 containerd[1804]: time="2025-07-06T23:56:50.102907048Z" level=info msg="StopPodSandbox for \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\" returns successfully" Jul 6 23:56:50.104287 containerd[1804]: time="2025-07-06T23:56:50.104224954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68646bbcb-2gm8s,Uid:716cfedd-0158-4dcb-9ac1-1fdba73e9c13,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.066 [INFO][4931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.067 [INFO][4931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in 
netns. ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" iface="eth0" netns="/var/run/netns/cni-01605436-3058-c94f-77f1-51279af821d9" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.067 [INFO][4931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" iface="eth0" netns="/var/run/netns/cni-01605436-3058-c94f-77f1-51279af821d9" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.067 [INFO][4931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" iface="eth0" netns="/var/run/netns/cni-01605436-3058-c94f-77f1-51279af821d9" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.067 [INFO][4931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.067 [INFO][4931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.111 [INFO][4979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" HandleID="k8s-pod-network.69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.112 [INFO][4979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.112 [INFO][4979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.119 [WARNING][4979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" HandleID="k8s-pod-network.69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.119 [INFO][4979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" HandleID="k8s-pod-network.69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.120 [INFO][4979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:50.123336 containerd[1804]: 2025-07-06 23:56:50.122 [INFO][4931] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:56:50.124046 containerd[1804]: time="2025-07-06T23:56:50.123458642Z" level=info msg="TearDown network for sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\" successfully" Jul 6 23:56:50.124046 containerd[1804]: time="2025-07-06T23:56:50.123487542Z" level=info msg="StopPodSandbox for \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\" returns successfully" Jul 6 23:56:50.124379 containerd[1804]: time="2025-07-06T23:56:50.124353346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f7f6c588-x5rcf,Uid:0b182eb4-112d-494c-ad49-a4d43ae37b16,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:50.359770 systemd-networkd[1364]: calibd99629f6b4: Link UP Jul 6 23:56:50.370230 systemd-networkd[1364]: calibd99629f6b4: Gained carrier Jul 6 23:56:50.381849 systemd[1]: run-netns-cni\x2d796652c1\x2d7d90\x2d6427\x2dbbaa\x2db2df2c2d7925.mount: Deactivated successfully. Jul 6 23:56:50.382051 systemd[1]: run-netns-cni\x2d01605436\x2d3058\x2dc94f\x2d77f1\x2d51279af821d9.mount: Deactivated successfully. Jul 6 23:56:50.382291 systemd[1]: run-netns-cni\x2defad13d6\x2d090a\x2d8b2a\x2d1818\x2db4e3a2a65568.mount: Deactivated successfully. 
Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.235 [INFO][4987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0 calico-apiserver-68646bbcb- calico-apiserver 716cfedd-0158-4dcb-9ac1-1fdba73e9c13 911 0 2025-07-06 23:56:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68646bbcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-fe0535f741 calico-apiserver-68646bbcb-2gm8s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd99629f6b4 [] [] }} ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-2gm8s" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.235 [INFO][4987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-2gm8s" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.304 [INFO][5019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" HandleID="k8s-pod-network.b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.304 [INFO][5019] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" HandleID="k8s-pod-network.b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-fe0535f741", "pod":"calico-apiserver-68646bbcb-2gm8s", "timestamp":"2025-07-06 23:56:50.304148567 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fe0535f741", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.304 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.304 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.304 [INFO][5019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fe0535f741' Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.316 [INFO][5019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.322 [INFO][5019] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.330 [INFO][5019] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.332 [INFO][5019] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.335 [INFO][5019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.335 [INFO][5019] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.336 [INFO][5019] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6 Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.342 [INFO][5019] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.352 [INFO][5019] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.50.195/26] block=192.168.50.192/26 handle="k8s-pod-network.b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.352 [INFO][5019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.195/26] handle="k8s-pod-network.b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.352 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:50.393148 containerd[1804]: 2025-07-06 23:56:50.352 [INFO][5019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.195/26] IPv6=[] ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" HandleID="k8s-pod-network.b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.395007 containerd[1804]: 2025-07-06 23:56:50.355 [INFO][4987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-2gm8s" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0", GenerateName:"calico-apiserver-68646bbcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"716cfedd-0158-4dcb-9ac1-1fdba73e9c13", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"68646bbcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"", Pod:"calico-apiserver-68646bbcb-2gm8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd99629f6b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:50.395007 containerd[1804]: 2025-07-06 23:56:50.355 [INFO][4987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.195/32] ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-2gm8s" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.395007 containerd[1804]: 2025-07-06 23:56:50.355 [INFO][4987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd99629f6b4 ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-2gm8s" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.395007 containerd[1804]: 2025-07-06 23:56:50.366 [INFO][4987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-2gm8s" 
WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.395007 containerd[1804]: 2025-07-06 23:56:50.369 [INFO][4987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-2gm8s" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0", GenerateName:"calico-apiserver-68646bbcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"716cfedd-0158-4dcb-9ac1-1fdba73e9c13", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68646bbcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6", Pod:"calico-apiserver-68646bbcb-2gm8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd99629f6b4", MAC:"1e:88:e3:6b:61:f7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:50.395007 containerd[1804]: 2025-07-06 23:56:50.389 [INFO][4987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6" Namespace="calico-apiserver" Pod="calico-apiserver-68646bbcb-2gm8s" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:56:50.426078 containerd[1804]: time="2025-07-06T23:56:50.425322521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:50.426078 containerd[1804]: time="2025-07-06T23:56:50.425702322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:50.426078 containerd[1804]: time="2025-07-06T23:56:50.425733223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:50.427137 containerd[1804]: time="2025-07-06T23:56:50.426130824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:50.497307 systemd-networkd[1364]: calic7765db8f15: Link UP Jul 6 23:56:50.499160 systemd-networkd[1364]: calic7765db8f15: Gained carrier Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.274 [INFO][4997] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0 goldmane-58fd7646b9- calico-system 84b426be-d8f6-4a60-8c2e-1c346fd9da79 912 0 2025-07-06 23:56:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-a-fe0535f741 goldmane-58fd7646b9-xhs8b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic7765db8f15 [] [] }} ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Namespace="calico-system" Pod="goldmane-58fd7646b9-xhs8b" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.274 [INFO][4997] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Namespace="calico-system" Pod="goldmane-58fd7646b9-xhs8b" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.335 [INFO][5030] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" HandleID="k8s-pod-network.9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.336 [INFO][5030] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" HandleID="k8s-pod-network.9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-fe0535f741", "pod":"goldmane-58fd7646b9-xhs8b", "timestamp":"2025-07-06 23:56:50.335860212 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fe0535f741", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.336 [INFO][5030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.352 [INFO][5030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.352 [INFO][5030] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fe0535f741' Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.417 [INFO][5030] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.426 [INFO][5030] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.442 [INFO][5030] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.444 [INFO][5030] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.450 [INFO][5030] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.450 [INFO][5030] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.459 [INFO][5030] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6 Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.471 [INFO][5030] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.482 [INFO][5030] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.50.196/26] block=192.168.50.192/26 handle="k8s-pod-network.9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.482 [INFO][5030] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.196/26] handle="k8s-pod-network.9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.482 [INFO][5030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:50.536140 containerd[1804]: 2025-07-06 23:56:50.482 [INFO][5030] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.196/26] IPv6=[] ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" HandleID="k8s-pod-network.9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.539004 containerd[1804]: 2025-07-06 23:56:50.485 [INFO][4997] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Namespace="calico-system" Pod="goldmane-58fd7646b9-xhs8b" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"84b426be-d8f6-4a60-8c2e-1c346fd9da79", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"", Pod:"goldmane-58fd7646b9-xhs8b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7765db8f15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:50.539004 containerd[1804]: 2025-07-06 23:56:50.485 [INFO][4997] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.196/32] ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Namespace="calico-system" Pod="goldmane-58fd7646b9-xhs8b" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.539004 containerd[1804]: 2025-07-06 23:56:50.486 [INFO][4997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7765db8f15 ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Namespace="calico-system" Pod="goldmane-58fd7646b9-xhs8b" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.539004 containerd[1804]: 2025-07-06 23:56:50.501 [INFO][4997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Namespace="calico-system" Pod="goldmane-58fd7646b9-xhs8b" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.539004 containerd[1804]: 2025-07-06 23:56:50.502 [INFO][4997] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Namespace="calico-system" Pod="goldmane-58fd7646b9-xhs8b" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"84b426be-d8f6-4a60-8c2e-1c346fd9da79", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6", Pod:"goldmane-58fd7646b9-xhs8b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7765db8f15", MAC:"82:84:65:e3:fb:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:50.539004 containerd[1804]: 2025-07-06 23:56:50.527 [INFO][4997] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6" Namespace="calico-system" Pod="goldmane-58fd7646b9-xhs8b" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:56:50.564009 containerd[1804]: time="2025-07-06T23:56:50.562544347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68646bbcb-2gm8s,Uid:716cfedd-0158-4dcb-9ac1-1fdba73e9c13,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6\"" Jul 6 23:56:50.570656 systemd-networkd[1364]: cali2ff430243d8: Gained IPv6LL Jul 6 23:56:50.609131 containerd[1804]: time="2025-07-06T23:56:50.608837559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:50.610610 containerd[1804]: time="2025-07-06T23:56:50.610528466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:50.610865 containerd[1804]: time="2025-07-06T23:56:50.610750267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:50.611447 containerd[1804]: time="2025-07-06T23:56:50.611350470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:50.620445 systemd-networkd[1364]: caliebe28a49cf1: Link UP Jul 6 23:56:50.623449 systemd-networkd[1364]: caliebe28a49cf1: Gained carrier Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.276 [INFO][5007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0 calico-kube-controllers-79f7f6c588- calico-system 0b182eb4-112d-494c-ad49-a4d43ae37b16 915 0 2025-07-06 23:56:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79f7f6c588 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-a-fe0535f741 calico-kube-controllers-79f7f6c588-x5rcf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliebe28a49cf1 [] [] }} ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Namespace="calico-system" Pod="calico-kube-controllers-79f7f6c588-x5rcf" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.276 [INFO][5007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Namespace="calico-system" Pod="calico-kube-controllers-79f7f6c588-x5rcf" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.350 [INFO][5032] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" 
HandleID="k8s-pod-network.1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.350 [INFO][5032] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" HandleID="k8s-pod-network.1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334f70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-fe0535f741", "pod":"calico-kube-controllers-79f7f6c588-x5rcf", "timestamp":"2025-07-06 23:56:50.350321378 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fe0535f741", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.350 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.482 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.482 [INFO][5032] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fe0535f741' Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.518 [INFO][5032] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.535 [INFO][5032] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.551 [INFO][5032] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.556 [INFO][5032] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.561 [INFO][5032] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.561 [INFO][5032] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.566 [INFO][5032] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66 Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.577 [INFO][5032] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.604 [INFO][5032] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.50.197/26] block=192.168.50.192/26 handle="k8s-pod-network.1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.604 [INFO][5032] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.197/26] handle="k8s-pod-network.1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.604 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:50.673548 containerd[1804]: 2025-07-06 23:56:50.604 [INFO][5032] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.197/26] IPv6=[] ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" HandleID="k8s-pod-network.1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.677110 containerd[1804]: 2025-07-06 23:56:50.610 [INFO][5007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Namespace="calico-system" Pod="calico-kube-controllers-79f7f6c588-x5rcf" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0", GenerateName:"calico-kube-controllers-79f7f6c588-", Namespace:"calico-system", SelfLink:"", UID:"0b182eb4-112d-494c-ad49-a4d43ae37b16", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f7f6c588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"", Pod:"calico-kube-controllers-79f7f6c588-x5rcf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliebe28a49cf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:50.677110 containerd[1804]: 2025-07-06 23:56:50.610 [INFO][5007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.197/32] ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Namespace="calico-system" Pod="calico-kube-controllers-79f7f6c588-x5rcf" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.677110 containerd[1804]: 2025-07-06 23:56:50.611 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebe28a49cf1 ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Namespace="calico-system" Pod="calico-kube-controllers-79f7f6c588-x5rcf" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.677110 containerd[1804]: 2025-07-06 23:56:50.645 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Namespace="calico-system" Pod="calico-kube-controllers-79f7f6c588-x5rcf" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.677110 containerd[1804]: 2025-07-06 23:56:50.647 [INFO][5007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Namespace="calico-system" Pod="calico-kube-controllers-79f7f6c588-x5rcf" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0", GenerateName:"calico-kube-controllers-79f7f6c588-", Namespace:"calico-system", SelfLink:"", UID:"0b182eb4-112d-494c-ad49-a4d43ae37b16", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f7f6c588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66", Pod:"calico-kube-controllers-79f7f6c588-x5rcf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.197/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliebe28a49cf1", MAC:"42:db:f6:18:81:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:50.677110 containerd[1804]: 2025-07-06 23:56:50.666 [INFO][5007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66" Namespace="calico-system" Pod="calico-kube-controllers-79f7f6c588-x5rcf" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:56:50.721060 containerd[1804]: time="2025-07-06T23:56:50.720737270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:50.721645 containerd[1804]: time="2025-07-06T23:56:50.721030371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:50.721645 containerd[1804]: time="2025-07-06T23:56:50.721220672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:50.722601 containerd[1804]: time="2025-07-06T23:56:50.722345577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:50.812532 containerd[1804]: time="2025-07-06T23:56:50.812404888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-xhs8b,Uid:84b426be-d8f6-4a60-8c2e-1c346fd9da79,Namespace:calico-system,Attempt:1,} returns sandbox id \"9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6\"" Jul 6 23:56:50.818088 containerd[1804]: time="2025-07-06T23:56:50.818045614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f7f6c588-x5rcf,Uid:0b182eb4-112d-494c-ad49-a4d43ae37b16,Namespace:calico-system,Attempt:1,} returns sandbox id \"1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66\"" Jul 6 23:56:50.842336 kubelet[3292]: I0706 23:56:50.842289 3292 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:56:50.890305 systemd-networkd[1364]: vxlan.calico: Gained IPv6LL Jul 6 23:56:50.893783 containerd[1804]: time="2025-07-06T23:56:50.892569354Z" level=info msg="StopPodSandbox for \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\"" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:50.964 [INFO][5226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:50.964 [INFO][5226] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" iface="eth0" netns="/var/run/netns/cni-8c0ae6b4-7b0a-b870-8fe3-1e8f6e098e9e" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:50.965 [INFO][5226] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" iface="eth0" netns="/var/run/netns/cni-8c0ae6b4-7b0a-b870-8fe3-1e8f6e098e9e" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:50.965 [INFO][5226] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" iface="eth0" netns="/var/run/netns/cni-8c0ae6b4-7b0a-b870-8fe3-1e8f6e098e9e" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:50.965 [INFO][5226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:50.965 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:51.019 [INFO][5234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" HandleID="k8s-pod-network.a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:51.020 [INFO][5234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:51.021 [INFO][5234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:51.032 [WARNING][5234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" HandleID="k8s-pod-network.a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:51.032 [INFO][5234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" HandleID="k8s-pod-network.a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:51.034 [INFO][5234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:51.041616 containerd[1804]: 2025-07-06 23:56:51.037 [INFO][5226] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:56:51.041616 containerd[1804]: time="2025-07-06T23:56:51.041106133Z" level=info msg="TearDown network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\" successfully" Jul 6 23:56:51.041616 containerd[1804]: time="2025-07-06T23:56:51.041235333Z" level=info msg="StopPodSandbox for \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\" returns successfully" Jul 6 23:56:51.042789 containerd[1804]: time="2025-07-06T23:56:51.042228938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2vkj,Uid:d05fe5f5-a0d0-4818-841f-97f17bafd42f,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:51.222351 systemd-networkd[1364]: cali3923dbc8ec6: Link UP Jul 6 23:56:51.224989 systemd-networkd[1364]: cali3923dbc8ec6: Gained carrier Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.133 [INFO][5266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0 csi-node-driver- calico-system d05fe5f5-a0d0-4818-841f-97f17bafd42f 931 0 2025-07-06 23:56:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-a-fe0535f741 csi-node-driver-q2vkj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3923dbc8ec6 [] [] }} ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Namespace="calico-system" Pod="csi-node-driver-q2vkj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.134 [INFO][5266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Namespace="calico-system" Pod="csi-node-driver-q2vkj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.161 [INFO][5278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" HandleID="k8s-pod-network.399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.161 [INFO][5278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" HandleID="k8s-pod-network.399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-fe0535f741", "pod":"csi-node-driver-q2vkj", "timestamp":"2025-07-06 23:56:51.161701283 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fe0535f741", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.161 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.162 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.162 [INFO][5278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fe0535f741' Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.169 [INFO][5278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.175 [INFO][5278] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.179 [INFO][5278] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.181 [INFO][5278] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.185 [INFO][5278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.185 [INFO][5278] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.50.192/26 handle="k8s-pod-network.399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.189 [INFO][5278] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.196 [INFO][5278] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.212 [INFO][5278] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.198/26] block=192.168.50.192/26 handle="k8s-pod-network.399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.212 [INFO][5278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.198/26] handle="k8s-pod-network.399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.212 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:56:51.246412 containerd[1804]: 2025-07-06 23:56:51.212 [INFO][5278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.198/26] IPv6=[] ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" HandleID="k8s-pod-network.399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.249896 containerd[1804]: 2025-07-06 23:56:51.218 [INFO][5266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Namespace="calico-system" Pod="csi-node-driver-q2vkj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d05fe5f5-a0d0-4818-841f-97f17bafd42f", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"", Pod:"csi-node-driver-q2vkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3923dbc8ec6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:51.249896 containerd[1804]: 2025-07-06 23:56:51.218 [INFO][5266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.198/32] ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Namespace="calico-system" Pod="csi-node-driver-q2vkj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.249896 containerd[1804]: 2025-07-06 23:56:51.218 [INFO][5266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3923dbc8ec6 ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Namespace="calico-system" Pod="csi-node-driver-q2vkj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.249896 containerd[1804]: 2025-07-06 23:56:51.225 [INFO][5266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Namespace="calico-system" Pod="csi-node-driver-q2vkj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.249896 containerd[1804]: 2025-07-06 23:56:51.226 [INFO][5266] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Namespace="calico-system" Pod="csi-node-driver-q2vkj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"d05fe5f5-a0d0-4818-841f-97f17bafd42f", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b", Pod:"csi-node-driver-q2vkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3923dbc8ec6", MAC:"3a:7f:f9:19:b1:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:51.249896 containerd[1804]: 2025-07-06 23:56:51.242 [INFO][5266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b" Namespace="calico-system" Pod="csi-node-driver-q2vkj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:56:51.282526 containerd[1804]: time="2025-07-06T23:56:51.281753131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:51.283900 containerd[1804]: time="2025-07-06T23:56:51.283845641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:51.283900 containerd[1804]: time="2025-07-06T23:56:51.283873141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:51.285278 containerd[1804]: time="2025-07-06T23:56:51.285236947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:51.381808 systemd[1]: run-netns-cni\x2d8c0ae6b4\x2d7b0a\x2db870\x2d8fe3\x2d1e8f6e098e9e.mount: Deactivated successfully. Jul 6 23:56:51.389706 containerd[1804]: time="2025-07-06T23:56:51.388922621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2vkj,Uid:d05fe5f5-a0d0-4818-841f-97f17bafd42f,Namespace:calico-system,Attempt:1,} returns sandbox id \"399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b\"" Jul 6 23:56:51.657881 systemd-networkd[1364]: calibd99629f6b4: Gained IPv6LL Jul 6 23:56:51.889257 containerd[1804]: time="2025-07-06T23:56:51.889066105Z" level=info msg="StopPodSandbox for \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\"" Jul 6 23:56:51.914895 systemd-networkd[1364]: caliebe28a49cf1: Gained IPv6LL Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.945 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.946 [INFO][5343] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" iface="eth0" netns="/var/run/netns/cni-a6429bb3-769a-1463-2bed-e8cc8be0b181" Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.947 [INFO][5343] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" iface="eth0" netns="/var/run/netns/cni-a6429bb3-769a-1463-2bed-e8cc8be0b181" Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.947 [INFO][5343] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" iface="eth0" netns="/var/run/netns/cni-a6429bb3-769a-1463-2bed-e8cc8be0b181" Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.947 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.947 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.973 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" HandleID="k8s-pod-network.8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.973 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.973 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.980 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" HandleID="k8s-pod-network.8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.980 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" HandleID="k8s-pod-network.8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.982 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:51.984778 containerd[1804]: 2025-07-06 23:56:51.983 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:56:51.985895 containerd[1804]: time="2025-07-06T23:56:51.985140543Z" level=info msg="TearDown network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\" successfully" Jul 6 23:56:51.985895 containerd[1804]: time="2025-07-06T23:56:51.985179144Z" level=info msg="StopPodSandbox for \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\" returns successfully" Jul 6 23:56:51.992059 containerd[1804]: time="2025-07-06T23:56:51.987915156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pjxzj,Uid:213a78c2-eb8b-4530-9913-02f60715b4f4,Namespace:kube-system,Attempt:1,}" Jul 6 23:56:51.992765 systemd[1]: run-netns-cni\x2da6429bb3\x2d769a\x2d1463\x2d2bed\x2de8cc8be0b181.mount: Deactivated successfully. 
Jul 6 23:56:52.361708 systemd-networkd[1364]: cali3923dbc8ec6: Gained IPv6LL Jul 6 23:56:52.541397 systemd-networkd[1364]: cali0e02abf52cf: Link UP Jul 6 23:56:52.541670 systemd-networkd[1364]: cali0e02abf52cf: Gained carrier Jul 6 23:56:52.553664 systemd-networkd[1364]: calic7765db8f15: Gained IPv6LL Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.435 [INFO][5360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0 coredns-7c65d6cfc9- kube-system 213a78c2-eb8b-4530-9913-02f60715b4f4 942 0 2025-07-06 23:56:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-fe0535f741 coredns-7c65d6cfc9-pjxzj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e02abf52cf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pjxzj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.435 [INFO][5360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pjxzj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.482 [INFO][5374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" HandleID="k8s-pod-network.4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 
23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.482 [INFO][5374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" HandleID="k8s-pod-network.4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5950), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-fe0535f741", "pod":"coredns-7c65d6cfc9-pjxzj", "timestamp":"2025-07-06 23:56:52.482144913 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fe0535f741", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.482 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.482 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.482 [INFO][5374] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fe0535f741' Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.492 [INFO][5374] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.497 [INFO][5374] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.503 [INFO][5374] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.505 [INFO][5374] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.508 [INFO][5374] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.509 [INFO][5374] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.510 [INFO][5374] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0 Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.519 [INFO][5374] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.530 [INFO][5374] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.50.199/26] block=192.168.50.192/26 handle="k8s-pod-network.4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.530 [INFO][5374] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.199/26] handle="k8s-pod-network.4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.530 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:52.577499 containerd[1804]: 2025-07-06 23:56:52.530 [INFO][5374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.199/26] IPv6=[] ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" HandleID="k8s-pod-network.4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:56:52.578473 containerd[1804]: 2025-07-06 23:56:52.532 [INFO][5360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pjxzj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"213a78c2-eb8b-4530-9913-02f60715b4f4", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"", Pod:"coredns-7c65d6cfc9-pjxzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e02abf52cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:52.578473 containerd[1804]: 2025-07-06 23:56:52.533 [INFO][5360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.199/32] ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pjxzj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:56:52.578473 containerd[1804]: 2025-07-06 23:56:52.534 [INFO][5360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e02abf52cf ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pjxzj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:56:52.578473 containerd[1804]: 2025-07-06 23:56:52.544 [INFO][5360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pjxzj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:56:52.578473 containerd[1804]: 2025-07-06 23:56:52.546 [INFO][5360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pjxzj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"213a78c2-eb8b-4530-9913-02f60715b4f4", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0", Pod:"coredns-7c65d6cfc9-pjxzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e02abf52cf", MAC:"92:42:94:67:03:59", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:52.578473 containerd[1804]: 2025-07-06 23:56:52.575 [INFO][5360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pjxzj" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:56:52.626610 containerd[1804]: time="2025-07-06T23:56:52.625694668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:52.626610 containerd[1804]: time="2025-07-06T23:56:52.625763869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:52.627945 containerd[1804]: time="2025-07-06T23:56:52.627681478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:52.627945 containerd[1804]: time="2025-07-06T23:56:52.627800178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:52.721810 containerd[1804]: time="2025-07-06T23:56:52.721095804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pjxzj,Uid:213a78c2-eb8b-4530-9913-02f60715b4f4,Namespace:kube-system,Attempt:1,} returns sandbox id \"4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0\"" Jul 6 23:56:52.731703 containerd[1804]: time="2025-07-06T23:56:52.731547152Z" level=info msg="CreateContainer within sandbox \"4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:56:52.768154 containerd[1804]: time="2025-07-06T23:56:52.767317315Z" level=info msg="CreateContainer within sandbox \"4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31360df7e48f015887d614ee96858eabf47aa7be92014fa7371556b2f00af517\"" Jul 6 23:56:52.770195 containerd[1804]: time="2025-07-06T23:56:52.770161328Z" level=info msg="StartContainer for \"31360df7e48f015887d614ee96858eabf47aa7be92014fa7371556b2f00af517\"" Jul 6 23:56:52.870161 containerd[1804]: time="2025-07-06T23:56:52.869865683Z" level=info msg="StartContainer for \"31360df7e48f015887d614ee96858eabf47aa7be92014fa7371556b2f00af517\" returns successfully" Jul 6 23:56:52.889631 containerd[1804]: time="2025-07-06T23:56:52.888846370Z" level=info msg="StopPodSandbox for \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\"" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.035 [INFO][5477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.036 [INFO][5477] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" iface="eth0" netns="/var/run/netns/cni-35650d4f-a4ab-b667-8eb1-9d2915e42df2" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.038 [INFO][5477] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" iface="eth0" netns="/var/run/netns/cni-35650d4f-a4ab-b667-8eb1-9d2915e42df2" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.039 [INFO][5477] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" iface="eth0" netns="/var/run/netns/cni-35650d4f-a4ab-b667-8eb1-9d2915e42df2" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.039 [INFO][5477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.039 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.147 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" HandleID="k8s-pod-network.33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.149 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.150 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.168 [WARNING][5484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" HandleID="k8s-pod-network.33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.168 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" HandleID="k8s-pod-network.33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.172 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:53.182056 containerd[1804]: 2025-07-06 23:56:53.177 [INFO][5477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:56:53.183015 containerd[1804]: time="2025-07-06T23:56:53.182231410Z" level=info msg="TearDown network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\" successfully" Jul 6 23:56:53.183015 containerd[1804]: time="2025-07-06T23:56:53.182271110Z" level=info msg="StopPodSandbox for \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\" returns successfully" Jul 6 23:56:53.186165 containerd[1804]: time="2025-07-06T23:56:53.183741917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nh8m8,Uid:58666ff6-c819-4067-ae41-b5a4a7ab70fc,Namespace:kube-system,Attempt:1,}" Jul 6 23:56:53.282919 kubelet[3292]: I0706 23:56:53.282463 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pjxzj" podStartSLOduration=44.282435667 podStartE2EDuration="44.282435667s" podCreationTimestamp="2025-07-06 23:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:53.280304358 +0000 UTC m=+50.036636099" watchObservedRunningTime="2025-07-06 23:56:53.282435667 +0000 UTC m=+50.038767508" Jul 6 23:56:53.379417 systemd[1]: run-netns-cni\x2d35650d4f\x2da4ab\x2db667\x2d8eb1\x2d9d2915e42df2.mount: Deactivated successfully. Jul 6 23:56:53.577297 systemd-networkd[1364]: cali0e02abf52cf: Gained IPv6LL Jul 6 23:56:53.649852 systemd-networkd[1364]: cali6517bf13166: Link UP Jul 6 23:56:53.654502 systemd-networkd[1364]: cali6517bf13166: Gained carrier Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.424 [INFO][5490] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0 coredns-7c65d6cfc9- kube-system 58666ff6-c819-4067-ae41-b5a4a7ab70fc 953 0 2025-07-06 23:56:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-fe0535f741 coredns-7c65d6cfc9-nh8m8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6517bf13166 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nh8m8" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.425 [INFO][5490] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nh8m8" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.533 [INFO][5507] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" HandleID="k8s-pod-network.a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.533 [INFO][5507] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" HandleID="k8s-pod-network.a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-fe0535f741", "pod":"coredns-7c65d6cfc9-nh8m8", "timestamp":"2025-07-06 23:56:53.533245113 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fe0535f741", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.533 [INFO][5507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.533 [INFO][5507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.533 [INFO][5507] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fe0535f741' Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.557 [INFO][5507] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.563 [INFO][5507] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.572 [INFO][5507] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.575 [INFO][5507] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.584 [INFO][5507] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.584 [INFO][5507] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.588 [INFO][5507] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331 Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.601 [INFO][5507] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.627 [INFO][5507] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.50.200/26] block=192.168.50.192/26 handle="k8s-pod-network.a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.629 [INFO][5507] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.200/26] handle="k8s-pod-network.a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" host="ci-4081.3.4-a-fe0535f741" Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.629 [INFO][5507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:53.712262 containerd[1804]: 2025-07-06 23:56:53.629 [INFO][5507] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.200/26] IPv6=[] ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" HandleID="k8s-pod-network.a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.716560 containerd[1804]: 2025-07-06 23:56:53.635 [INFO][5490] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nh8m8" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"58666ff6-c819-4067-ae41-b5a4a7ab70fc", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"", Pod:"coredns-7c65d6cfc9-nh8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6517bf13166", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:53.716560 containerd[1804]: 2025-07-06 23:56:53.636 [INFO][5490] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.200/32] ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nh8m8" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.716560 containerd[1804]: 2025-07-06 23:56:53.636 [INFO][5490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6517bf13166 ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nh8m8" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.716560 containerd[1804]: 2025-07-06 23:56:53.658 [INFO][5490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nh8m8" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.716560 containerd[1804]: 2025-07-06 23:56:53.662 [INFO][5490] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nh8m8" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"58666ff6-c819-4067-ae41-b5a4a7ab70fc", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331", Pod:"coredns-7c65d6cfc9-nh8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6517bf13166", MAC:"da:5a:3a:0f:c0:8b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:53.716560 containerd[1804]: 2025-07-06 23:56:53.698 [INFO][5490] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nh8m8" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:56:53.813907 containerd[1804]: time="2025-07-06T23:56:53.813325892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:53.814779 containerd[1804]: time="2025-07-06T23:56:53.814344696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:53.816043 containerd[1804]: time="2025-07-06T23:56:53.815981604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:53.816490 containerd[1804]: time="2025-07-06T23:56:53.816293305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:53.981988 containerd[1804]: time="2025-07-06T23:56:53.981924162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nh8m8,Uid:58666ff6-c819-4067-ae41-b5a4a7ab70fc,Namespace:kube-system,Attempt:1,} returns sandbox id \"a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331\"" Jul 6 23:56:53.994146 containerd[1804]: time="2025-07-06T23:56:53.992421810Z" level=info msg="CreateContainer within sandbox \"a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:56:54.050967 containerd[1804]: time="2025-07-06T23:56:54.050915877Z" level=info msg="CreateContainer within sandbox \"a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3b9dd7df62f902458fa428d77a09d5288d5a54401b5a7d0c2a1c2f0a305f0b7\"" Jul 6 23:56:54.054294 containerd[1804]: time="2025-07-06T23:56:54.054258092Z" level=info msg="StartContainer for \"d3b9dd7df62f902458fa428d77a09d5288d5a54401b5a7d0c2a1c2f0a305f0b7\"" Jul 6 23:56:54.293350 containerd[1804]: time="2025-07-06T23:56:54.292409479Z" level=info msg="StartContainer for \"d3b9dd7df62f902458fa428d77a09d5288d5a54401b5a7d0c2a1c2f0a305f0b7\" returns successfully" Jul 6 23:56:54.730275 systemd-networkd[1364]: cali6517bf13166: Gained IPv6LL Jul 6 23:56:54.842465 containerd[1804]: time="2025-07-06T23:56:54.842385291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:54.848225 containerd[1804]: time="2025-07-06T23:56:54.848157117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 6 23:56:54.853169 containerd[1804]: time="2025-07-06T23:56:54.851770834Z" level=info msg="ImageCreate event 
name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:54.863135 containerd[1804]: time="2025-07-06T23:56:54.861296177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:54.863634 containerd[1804]: time="2025-07-06T23:56:54.863359487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.078472491s" Jul 6 23:56:54.863790 containerd[1804]: time="2025-07-06T23:56:54.863741288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:56:54.869947 containerd[1804]: time="2025-07-06T23:56:54.869347314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:56:54.870255 containerd[1804]: time="2025-07-06T23:56:54.870222118Z" level=info msg="CreateContainer within sandbox \"e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:56:54.916840 containerd[1804]: time="2025-07-06T23:56:54.916721830Z" level=info msg="CreateContainer within sandbox \"e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83027f09713daaff6806f7c2a5831d2efd7f08fdc0afbb2509315aad2977ccb4\"" Jul 6 23:56:54.919369 containerd[1804]: time="2025-07-06T23:56:54.919333842Z" level=info 
msg="StartContainer for \"83027f09713daaff6806f7c2a5831d2efd7f08fdc0afbb2509315aad2977ccb4\"" Jul 6 23:56:55.110141 containerd[1804]: time="2025-07-06T23:56:55.109950613Z" level=info msg="StartContainer for \"83027f09713daaff6806f7c2a5831d2efd7f08fdc0afbb2509315aad2977ccb4\" returns successfully" Jul 6 23:56:55.324074 kubelet[3292]: I0706 23:56:55.323300 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68646bbcb-rvxtr" podStartSLOduration=31.93665829 podStartE2EDuration="37.323275587s" podCreationTimestamp="2025-07-06 23:56:18 +0000 UTC" firstStartedPulling="2025-07-06 23:56:49.479699303 +0000 UTC m=+46.236031144" lastFinishedPulling="2025-07-06 23:56:54.8663167 +0000 UTC m=+51.622648441" observedRunningTime="2025-07-06 23:56:55.29995748 +0000 UTC m=+52.056289321" watchObservedRunningTime="2025-07-06 23:56:55.323275587 +0000 UTC m=+52.079607328" Jul 6 23:56:55.329146 kubelet[3292]: I0706 23:56:55.323582 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nh8m8" podStartSLOduration=46.323570088 podStartE2EDuration="46.323570088s" podCreationTimestamp="2025-07-06 23:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:55.321356678 +0000 UTC m=+52.077688519" watchObservedRunningTime="2025-07-06 23:56:55.323570088 +0000 UTC m=+52.079901829" Jul 6 23:56:58.355064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918139092.mount: Deactivated successfully. 
Jul 6 23:56:58.428770 containerd[1804]: time="2025-07-06T23:56:58.428703531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:58.435214 containerd[1804]: time="2025-07-06T23:56:58.435151562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 6 23:56:58.443144 containerd[1804]: time="2025-07-06T23:56:58.440867690Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:58.462800 containerd[1804]: time="2025-07-06T23:56:58.462747898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:58.463884 containerd[1804]: time="2025-07-06T23:56:58.463841703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.593765186s" Jul 6 23:56:58.463978 containerd[1804]: time="2025-07-06T23:56:58.463889603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 6 23:56:58.470924 containerd[1804]: time="2025-07-06T23:56:58.468738027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:56:58.471984 containerd[1804]: time="2025-07-06T23:56:58.471954043Z" level=info msg="CreateContainer within sandbox 
\"a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:56:58.522846 containerd[1804]: time="2025-07-06T23:56:58.522787693Z" level=info msg="CreateContainer within sandbox \"a34355d79180ff43d4740cec42c0c33c81e25a1f5b390d3e4f318863e9428410\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"01237a54c66ebe246bd408e9e3f8bd894df8f5702034b9165ef416af824aa8d4\"" Jul 6 23:56:58.527203 containerd[1804]: time="2025-07-06T23:56:58.524368300Z" level=info msg="StartContainer for \"01237a54c66ebe246bd408e9e3f8bd894df8f5702034b9165ef416af824aa8d4\"" Jul 6 23:56:58.662062 containerd[1804]: time="2025-07-06T23:56:58.661941576Z" level=info msg="StartContainer for \"01237a54c66ebe246bd408e9e3f8bd894df8f5702034b9165ef416af824aa8d4\" returns successfully" Jul 6 23:56:58.789150 containerd[1804]: time="2025-07-06T23:56:58.788969499Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:58.797144 containerd[1804]: time="2025-07-06T23:56:58.797078039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:56:58.799770 containerd[1804]: time="2025-07-06T23:56:58.799565851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 330.789824ms" Jul 6 23:56:58.799770 containerd[1804]: time="2025-07-06T23:56:58.799616851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 
23:56:58.801411 containerd[1804]: time="2025-07-06T23:56:58.801380760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:56:58.802988 containerd[1804]: time="2025-07-06T23:56:58.802952468Z" level=info msg="CreateContainer within sandbox \"b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:56:58.856131 containerd[1804]: time="2025-07-06T23:56:58.856074329Z" level=info msg="CreateContainer within sandbox \"b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"576fff87955aaa20b2b07598ddc263e97420a3c014b473d9df3c7a655e384683\"" Jul 6 23:56:58.858405 containerd[1804]: time="2025-07-06T23:56:58.856931133Z" level=info msg="StartContainer for \"576fff87955aaa20b2b07598ddc263e97420a3c014b473d9df3c7a655e384683\"" Jul 6 23:56:59.020135 containerd[1804]: time="2025-07-06T23:56:59.019987433Z" level=info msg="StartContainer for \"576fff87955aaa20b2b07598ddc263e97420a3c014b473d9df3c7a655e384683\" returns successfully" Jul 6 23:56:59.371564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483891461.mount: Deactivated successfully. 
Jul 6 23:56:59.441730 kubelet[3292]: I0706 23:56:59.439991 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68646bbcb-2gm8s" podStartSLOduration=33.207667512 podStartE2EDuration="41.439964395s" podCreationTimestamp="2025-07-06 23:56:18 +0000 UTC" firstStartedPulling="2025-07-06 23:56:50.568480074 +0000 UTC m=+47.324811815" lastFinishedPulling="2025-07-06 23:56:58.800776957 +0000 UTC m=+55.557108698" observedRunningTime="2025-07-06 23:56:59.387474637 +0000 UTC m=+56.143806478" watchObservedRunningTime="2025-07-06 23:56:59.439964395 +0000 UTC m=+56.196296136" Jul 6 23:57:00.415190 kubelet[3292]: I0706 23:57:00.412596 3292 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:02.356026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444557944.mount: Deactivated successfully. Jul 6 23:57:03.910880 containerd[1804]: time="2025-07-06T23:57:03.910621007Z" level=info msg="StopPodSandbox for \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\"" Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.053 [WARNING][5773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"213a78c2-eb8b-4530-9913-02f60715b4f4", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0", Pod:"coredns-7c65d6cfc9-pjxzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e02abf52cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 
23:57:04.053 [INFO][5773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.053 [INFO][5773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" iface="eth0" netns="" Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.053 [INFO][5773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.053 [INFO][5773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.100 [INFO][5782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" HandleID="k8s-pod-network.8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.121 [INFO][5782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.121 [INFO][5782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.129 [WARNING][5782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" HandleID="k8s-pod-network.8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.129 [INFO][5782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" HandleID="k8s-pod-network.8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.130 [INFO][5782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:04.134534 containerd[1804]: 2025-07-06 23:57:04.132 [INFO][5773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:57:04.135780 containerd[1804]: time="2025-07-06T23:57:04.134592133Z" level=info msg="TearDown network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\" successfully" Jul 6 23:57:04.135780 containerd[1804]: time="2025-07-06T23:57:04.134629933Z" level=info msg="StopPodSandbox for \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\" returns successfully" Jul 6 23:57:04.136324 containerd[1804]: time="2025-07-06T23:57:04.136288440Z" level=info msg="RemovePodSandbox for \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\"" Jul 6 23:57:04.136427 containerd[1804]: time="2025-07-06T23:57:04.136333540Z" level=info msg="Forcibly stopping sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\"" Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.183 [WARNING][5797] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"213a78c2-eb8b-4530-9913-02f60715b4f4", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"4aa0408942e30a8c35362ab9055feec0e68c9f9a7c5c067556bef2bdcf8d6fa0", Pod:"coredns-7c65d6cfc9-pjxzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e02abf52cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 
23:57:04.183 [INFO][5797] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.183 [INFO][5797] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" iface="eth0" netns="" Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.183 [INFO][5797] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.183 [INFO][5797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.218 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" HandleID="k8s-pod-network.8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.278 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.278 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.287 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" HandleID="k8s-pod-network.8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.287 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" HandleID="k8s-pod-network.8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--pjxzj-eth0" Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.289 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:04.294284 containerd[1804]: 2025-07-06 23:57:04.291 [INFO][5797] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0" Jul 6 23:57:04.294284 containerd[1804]: time="2025-07-06T23:57:04.293972392Z" level=info msg="TearDown network for sandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\" successfully" Jul 6 23:57:04.329346 containerd[1804]: time="2025-07-06T23:57:04.328842236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:57:04.329346 containerd[1804]: time="2025-07-06T23:57:04.328948037Z" level=info msg="RemovePodSandbox \"8382a485954010857554239d0c9ab1d3da7c452f9f947af908401012f69762b0\" returns successfully" Jul 6 23:57:04.331143 containerd[1804]: time="2025-07-06T23:57:04.330214242Z" level=info msg="StopPodSandbox for \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\"" Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.505 [WARNING][5819] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.505 [INFO][5819] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.505 [INFO][5819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" iface="eth0" netns="" Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.505 [INFO][5819] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.505 [INFO][5819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.550 [INFO][5831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" HandleID="k8s-pod-network.9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.550 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.550 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.567 [WARNING][5831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" HandleID="k8s-pod-network.9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.567 [INFO][5831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" HandleID="k8s-pod-network.9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.570 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:04.574050 containerd[1804]: 2025-07-06 23:57:04.572 [INFO][5819] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:57:04.575150 containerd[1804]: time="2025-07-06T23:57:04.574767454Z" level=info msg="TearDown network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\" successfully" Jul 6 23:57:04.575150 containerd[1804]: time="2025-07-06T23:57:04.574811254Z" level=info msg="StopPodSandbox for \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\" returns successfully" Jul 6 23:57:04.577617 containerd[1804]: time="2025-07-06T23:57:04.577112863Z" level=info msg="RemovePodSandbox for \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\"" Jul 6 23:57:04.577617 containerd[1804]: time="2025-07-06T23:57:04.577239364Z" level=info msg="Forcibly stopping sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\"" Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.625 [WARNING][5846] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" WorkloadEndpoint="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.627 [INFO][5846] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.627 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" iface="eth0" netns="" Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.627 [INFO][5846] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.627 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.662 [INFO][5853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" HandleID="k8s-pod-network.9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.662 [INFO][5853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.662 [INFO][5853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.674 [WARNING][5853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" HandleID="k8s-pod-network.9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.674 [INFO][5853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" HandleID="k8s-pod-network.9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Workload="ci--4081.3.4--a--fe0535f741-k8s-whisker--76948fb6d9--8kpvd-eth0" Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.676 [INFO][5853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:04.681267 containerd[1804]: 2025-07-06 23:57:04.679 [INFO][5846] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3" Jul 6 23:57:04.682322 containerd[1804]: time="2025-07-06T23:57:04.682002097Z" level=info msg="TearDown network for sandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\" successfully" Jul 6 23:57:07.380880 containerd[1804]: time="2025-07-06T23:57:07.380807759Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:57:07.383855 containerd[1804]: time="2025-07-06T23:57:07.380912660Z" level=info msg="RemovePodSandbox \"9952bbacd14ba1de6d884fd3ade4b45b34f037fd86d34d3146b2429dd997eeb3\" returns successfully" Jul 6 23:57:07.383855 containerd[1804]: time="2025-07-06T23:57:07.382683767Z" level=info msg="StopPodSandbox for \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\"" Jul 6 23:57:07.391991 containerd[1804]: time="2025-07-06T23:57:07.391950105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:07.397361 containerd[1804]: time="2025-07-06T23:57:07.397310928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:57:07.406138 containerd[1804]: time="2025-07-06T23:57:07.404203356Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:07.421023 containerd[1804]: time="2025-07-06T23:57:07.418594316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 8.617168556s" Jul 6 23:57:07.421273 containerd[1804]: time="2025-07-06T23:57:07.421232126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:57:07.421425 containerd[1804]: time="2025-07-06T23:57:07.421180926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:07.431161 containerd[1804]: time="2025-07-06T23:57:07.429755862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:57:07.435576 containerd[1804]: time="2025-07-06T23:57:07.435395385Z" level=info msg="CreateContainer within sandbox \"9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:57:07.492664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237416121.mount: Deactivated successfully. Jul 6 23:57:07.499452 containerd[1804]: time="2025-07-06T23:57:07.499398750Z" level=info msg="CreateContainer within sandbox \"9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2428782712e2a18e16510e4be6f61acfb08296af5cec0847013476e36cf82cb6\"" Jul 6 23:57:07.503309 containerd[1804]: time="2025-07-06T23:57:07.501462158Z" level=info msg="StartContainer for \"2428782712e2a18e16510e4be6f61acfb08296af5cec0847013476e36cf82cb6\"" Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.551 [WARNING][5867] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"58666ff6-c819-4067-ae41-b5a4a7ab70fc", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331", Pod:"coredns-7c65d6cfc9-nh8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6517bf13166", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 
23:57:07.552 [INFO][5867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.552 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" iface="eth0" netns="" Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.553 [INFO][5867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.553 [INFO][5867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.607 [INFO][5889] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" HandleID="k8s-pod-network.33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.608 [INFO][5889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.608 [INFO][5889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.618 [WARNING][5889] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" HandleID="k8s-pod-network.33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.618 [INFO][5889] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" HandleID="k8s-pod-network.33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.622 [INFO][5889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:07.625669 containerd[1804]: 2025-07-06 23:57:07.623 [INFO][5867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:57:07.626525 containerd[1804]: time="2025-07-06T23:57:07.626477375Z" level=info msg="TearDown network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\" successfully" Jul 6 23:57:07.626696 containerd[1804]: time="2025-07-06T23:57:07.626579076Z" level=info msg="StopPodSandbox for \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\" returns successfully" Jul 6 23:57:07.627646 containerd[1804]: time="2025-07-06T23:57:07.627615180Z" level=info msg="RemovePodSandbox for \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\"" Jul 6 23:57:07.628003 containerd[1804]: time="2025-07-06T23:57:07.627852681Z" level=info msg="Forcibly stopping sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\"" Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.698 [WARNING][5913] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"58666ff6-c819-4067-ae41-b5a4a7ab70fc", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"a83ba5ae6b03a3c407ca65c1d040fb32e72c5d7bab30fbdeef7a2f8e00b2e331", Pod:"coredns-7c65d6cfc9-nh8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6517bf13166", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 
23:57:07.699 [INFO][5913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.699 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" iface="eth0" netns="" Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.699 [INFO][5913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.699 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.736 [INFO][5920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" HandleID="k8s-pod-network.33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.737 [INFO][5920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.738 [INFO][5920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.752 [WARNING][5920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" HandleID="k8s-pod-network.33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.755 [INFO][5920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" HandleID="k8s-pod-network.33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Workload="ci--4081.3.4--a--fe0535f741-k8s-coredns--7c65d6cfc9--nh8m8-eth0" Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.760 [INFO][5920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:07.772004 containerd[1804]: 2025-07-06 23:57:07.764 [INFO][5913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe" Jul 6 23:57:07.772004 containerd[1804]: time="2025-07-06T23:57:07.770406871Z" level=info msg="TearDown network for sandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\" successfully" Jul 6 23:57:07.786413 containerd[1804]: time="2025-07-06T23:57:07.786329537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:57:07.789169 containerd[1804]: time="2025-07-06T23:57:07.787872043Z" level=info msg="RemovePodSandbox \"33419a5ed536369aa04ac8f4df9355b48ab302e59e02e36d3a464f029ef956fe\" returns successfully" Jul 6 23:57:07.789169 containerd[1804]: time="2025-07-06T23:57:07.787975043Z" level=info msg="StartContainer for \"2428782712e2a18e16510e4be6f61acfb08296af5cec0847013476e36cf82cb6\" returns successfully" Jul 6 23:57:07.789531 containerd[1804]: time="2025-07-06T23:57:07.789504550Z" level=info msg="StopPodSandbox for \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\"" Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.850 [WARNING][5944] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0", GenerateName:"calico-kube-controllers-79f7f6c588-", Namespace:"calico-system", SelfLink:"", UID:"0b182eb4-112d-494c-ad49-a4d43ae37b16", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f7f6c588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", 
ContainerID:"1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66", Pod:"calico-kube-controllers-79f7f6c588-x5rcf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliebe28a49cf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.851 [INFO][5944] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.851 [INFO][5944] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" iface="eth0" netns="" Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.851 [INFO][5944] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.851 [INFO][5944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.907 [INFO][5952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" HandleID="k8s-pod-network.69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.907 [INFO][5952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.908 [INFO][5952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.921 [WARNING][5952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" HandleID="k8s-pod-network.69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.921 [INFO][5952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" HandleID="k8s-pod-network.69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.923 [INFO][5952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:07.928462 containerd[1804]: 2025-07-06 23:57:07.925 [INFO][5944] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:57:07.932223 containerd[1804]: time="2025-07-06T23:57:07.929820630Z" level=info msg="TearDown network for sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\" successfully" Jul 6 23:57:07.932223 containerd[1804]: time="2025-07-06T23:57:07.929864630Z" level=info msg="StopPodSandbox for \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\" returns successfully" Jul 6 23:57:07.932792 containerd[1804]: time="2025-07-06T23:57:07.932204240Z" level=info msg="RemovePodSandbox for \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\"" Jul 6 23:57:07.932792 containerd[1804]: time="2025-07-06T23:57:07.932563741Z" level=info msg="Forcibly stopping sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\"" Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.003 [WARNING][5967] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0", GenerateName:"calico-kube-controllers-79f7f6c588-", Namespace:"calico-system", SelfLink:"", UID:"0b182eb4-112d-494c-ad49-a4d43ae37b16", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f7f6c588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66", Pod:"calico-kube-controllers-79f7f6c588-x5rcf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliebe28a49cf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.004 [INFO][5967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.004 [INFO][5967] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" iface="eth0" netns="" Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.004 [INFO][5967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.004 [INFO][5967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.040 [INFO][5977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" HandleID="k8s-pod-network.69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.041 [INFO][5977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.041 [INFO][5977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.048 [WARNING][5977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" HandleID="k8s-pod-network.69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.048 [INFO][5977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" HandleID="k8s-pod-network.69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--kube--controllers--79f7f6c588--x5rcf-eth0" Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.050 [INFO][5977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:08.053828 containerd[1804]: 2025-07-06 23:57:08.052 [INFO][5967] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054" Jul 6 23:57:08.054552 containerd[1804]: time="2025-07-06T23:57:08.053891543Z" level=info msg="TearDown network for sandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\" successfully" Jul 6 23:57:08.074352 containerd[1804]: time="2025-07-06T23:57:08.074073227Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:57:08.074352 containerd[1804]: time="2025-07-06T23:57:08.074196927Z" level=info msg="RemovePodSandbox \"69b7f86f4951906b8e375e0c7ae8fef90ab2fc4da87323c5c70c43c705072054\" returns successfully" Jul 6 23:57:08.075654 containerd[1804]: time="2025-07-06T23:57:08.075268332Z" level=info msg="StopPodSandbox for \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\"" Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.124 [WARNING][5991] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0", GenerateName:"calico-apiserver-68646bbcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f805d877-66eb-46da-b324-d84c54cb40ca", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68646bbcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860", Pod:"calico-apiserver-68646bbcb-rvxtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ff430243d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.125 [INFO][5991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.125 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" iface="eth0" netns="" Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.125 [INFO][5991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.125 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.160 [INFO][5999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" HandleID="k8s-pod-network.826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.161 [INFO][5999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.161 [INFO][5999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.172 [WARNING][5999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" HandleID="k8s-pod-network.826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.172 [INFO][5999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" HandleID="k8s-pod-network.826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.174 [INFO][5999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:08.177518 containerd[1804]: 2025-07-06 23:57:08.175 [INFO][5991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:57:08.179312 containerd[1804]: time="2025-07-06T23:57:08.178280958Z" level=info msg="TearDown network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\" successfully" Jul 6 23:57:08.179312 containerd[1804]: time="2025-07-06T23:57:08.178324558Z" level=info msg="StopPodSandbox for \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\" returns successfully" Jul 6 23:57:08.179312 containerd[1804]: time="2025-07-06T23:57:08.179038861Z" level=info msg="RemovePodSandbox for \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\"" Jul 6 23:57:08.179312 containerd[1804]: time="2025-07-06T23:57:08.179071861Z" level=info msg="Forcibly stopping sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\"" Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.246 [WARNING][6014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0", GenerateName:"calico-apiserver-68646bbcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f805d877-66eb-46da-b324-d84c54cb40ca", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68646bbcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"e30f7d1f1932d020cb01492d4834a2fdddb18f6f4006520f041c9629a506a860", Pod:"calico-apiserver-68646bbcb-rvxtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ff430243d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.247 [INFO][6014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.247 [INFO][6014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" iface="eth0" netns="" Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.247 [INFO][6014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.247 [INFO][6014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.308 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" HandleID="k8s-pod-network.826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.311 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.311 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.323 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" HandleID="k8s-pod-network.826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.323 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" HandleID="k8s-pod-network.826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--rvxtr-eth0" Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.326 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:08.334435 containerd[1804]: 2025-07-06 23:57:08.329 [INFO][6014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190" Jul 6 23:57:08.334435 containerd[1804]: time="2025-07-06T23:57:08.334014402Z" level=info msg="TearDown network for sandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\" successfully" Jul 6 23:57:08.353717 containerd[1804]: time="2025-07-06T23:57:08.350597370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:57:08.353717 containerd[1804]: time="2025-07-06T23:57:08.350691571Z" level=info msg="RemovePodSandbox \"826fc20da8489c62ec12d3618b4d8176af5f7c46dad5d37f91f6e61f39311190\" returns successfully" Jul 6 23:57:08.353717 containerd[1804]: time="2025-07-06T23:57:08.353308082Z" level=info msg="StopPodSandbox for \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\"" Jul 6 23:57:08.516347 kubelet[3292]: I0706 23:57:08.516255 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-698d94675f-jg87x" podStartSLOduration=11.169895632 podStartE2EDuration="21.516229655s" podCreationTimestamp="2025-07-06 23:56:47 +0000 UTC" firstStartedPulling="2025-07-06 23:56:48.122089903 +0000 UTC m=+44.878421644" lastFinishedPulling="2025-07-06 23:56:58.468423826 +0000 UTC m=+55.224755667" observedRunningTime="2025-07-06 23:56:59.462566306 +0000 UTC m=+56.218898147" watchObservedRunningTime="2025-07-06 23:57:08.516229655 +0000 UTC m=+65.272561496" Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.461 [WARNING][6038] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d05fe5f5-a0d0-4818-841f-97f17bafd42f", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b", Pod:"csi-node-driver-q2vkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3923dbc8ec6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.462 [INFO][6038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.462 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" iface="eth0" netns="" Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.462 [INFO][6038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.462 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.594 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" HandleID="k8s-pod-network.a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.594 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.594 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.610 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" HandleID="k8s-pod-network.a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.610 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" HandleID="k8s-pod-network.a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.618 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:08.629042 containerd[1804]: 2025-07-06 23:57:08.623 [INFO][6038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:57:08.629042 containerd[1804]: time="2025-07-06T23:57:08.627103014Z" level=info msg="TearDown network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\" successfully" Jul 6 23:57:08.629042 containerd[1804]: time="2025-07-06T23:57:08.627148814Z" level=info msg="StopPodSandbox for \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\" returns successfully" Jul 6 23:57:08.629042 containerd[1804]: time="2025-07-06T23:57:08.627682716Z" level=info msg="RemovePodSandbox for \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\"" Jul 6 23:57:08.629042 containerd[1804]: time="2025-07-06T23:57:08.627716017Z" level=info msg="Forcibly stopping sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\"" Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.758 [WARNING][6082] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d05fe5f5-a0d0-4818-841f-97f17bafd42f", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b", Pod:"csi-node-driver-q2vkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3923dbc8ec6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.761 [INFO][6082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.761 [INFO][6082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" iface="eth0" netns="" Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.761 [INFO][6082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.761 [INFO][6082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.838 [INFO][6089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" HandleID="k8s-pod-network.a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.838 [INFO][6089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.840 [INFO][6089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.850 [WARNING][6089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" HandleID="k8s-pod-network.a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.850 [INFO][6089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" HandleID="k8s-pod-network.a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Workload="ci--4081.3.4--a--fe0535f741-k8s-csi--node--driver--q2vkj-eth0" Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.853 [INFO][6089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:08.861279 containerd[1804]: 2025-07-06 23:57:08.857 [INFO][6082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538" Jul 6 23:57:08.863405 containerd[1804]: time="2025-07-06T23:57:08.863260591Z" level=info msg="TearDown network for sandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\" successfully" Jul 6 23:57:08.882425 containerd[1804]: time="2025-07-06T23:57:08.882152469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:57:08.885283 containerd[1804]: time="2025-07-06T23:57:08.885247582Z" level=info msg="RemovePodSandbox \"a90e999e4422893b8f06f662a523723ccbb931b48e43c09ecaa5d6a4c919f538\" returns successfully" Jul 6 23:57:08.889579 containerd[1804]: time="2025-07-06T23:57:08.888969697Z" level=info msg="StopPodSandbox for \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\"" Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.009 [WARNING][6104] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"84b426be-d8f6-4a60-8c2e-1c346fd9da79", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6", Pod:"goldmane-58fd7646b9-xhs8b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calic7765db8f15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.011 [INFO][6104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.011 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" iface="eth0" netns="" Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.011 [INFO][6104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.011 [INFO][6104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.106 [INFO][6111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" HandleID="k8s-pod-network.80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.106 [INFO][6111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.106 [INFO][6111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.117 [WARNING][6111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" HandleID="k8s-pod-network.80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.118 [INFO][6111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" HandleID="k8s-pod-network.80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.122 [INFO][6111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:09.129314 containerd[1804]: 2025-07-06 23:57:09.125 [INFO][6104] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:57:09.132382 containerd[1804]: time="2025-07-06T23:57:09.129487292Z" level=info msg="TearDown network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\" successfully" Jul 6 23:57:09.132382 containerd[1804]: time="2025-07-06T23:57:09.129520892Z" level=info msg="StopPodSandbox for \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\" returns successfully" Jul 6 23:57:09.132382 containerd[1804]: time="2025-07-06T23:57:09.131461700Z" level=info msg="RemovePodSandbox for \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\"" Jul 6 23:57:09.132382 containerd[1804]: time="2025-07-06T23:57:09.131502400Z" level=info msg="Forcibly stopping sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\"" Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.276 [WARNING][6125] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"84b426be-d8f6-4a60-8c2e-1c346fd9da79", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"9bceee83bf2cfdcb4efbf7c9bb454dac13d60ce3487b07708eb56a51704233b6", Pod:"goldmane-58fd7646b9-xhs8b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7765db8f15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.276 [INFO][6125] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.277 [INFO][6125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" iface="eth0" netns="" Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.277 [INFO][6125] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.277 [INFO][6125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.367 [INFO][6133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" HandleID="k8s-pod-network.80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.367 [INFO][6133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.367 [INFO][6133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.393 [WARNING][6133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" HandleID="k8s-pod-network.80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.393 [INFO][6133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" HandleID="k8s-pod-network.80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Workload="ci--4081.3.4--a--fe0535f741-k8s-goldmane--58fd7646b9--xhs8b-eth0" Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.396 [INFO][6133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:09.402889 containerd[1804]: 2025-07-06 23:57:09.398 [INFO][6125] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e" Jul 6 23:57:09.405144 containerd[1804]: time="2025-07-06T23:57:09.403669426Z" level=info msg="TearDown network for sandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\" successfully" Jul 6 23:57:09.544473 systemd[1]: run-containerd-runc-k8s.io-2428782712e2a18e16510e4be6f61acfb08296af5cec0847013476e36cf82cb6-runc.JGGrVL.mount: Deactivated successfully. Jul 6 23:57:09.782176 containerd[1804]: time="2025-07-06T23:57:09.781696989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:57:09.783293 containerd[1804]: time="2025-07-06T23:57:09.783136095Z" level=info msg="RemovePodSandbox \"80c1c2e92a3887bc20c2e75aa1f58087979ea0f51d091409de39568d5767d87e\" returns successfully" Jul 6 23:57:09.785291 containerd[1804]: time="2025-07-06T23:57:09.785181804Z" level=info msg="StopPodSandbox for \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\"" Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.013 [WARNING][6174] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0", GenerateName:"calico-apiserver-68646bbcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"716cfedd-0158-4dcb-9ac1-1fdba73e9c13", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68646bbcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6", Pod:"calico-apiserver-68646bbcb-2gm8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd99629f6b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.017 [INFO][6174] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.017 [INFO][6174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" iface="eth0" netns="" Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.017 [INFO][6174] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.017 [INFO][6174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.124 [INFO][6181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" HandleID="k8s-pod-network.167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.128 [INFO][6181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.131 [INFO][6181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.144 [WARNING][6181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" HandleID="k8s-pod-network.167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.145 [INFO][6181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" HandleID="k8s-pod-network.167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.149 [INFO][6181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:10.162840 containerd[1804]: 2025-07-06 23:57:10.155 [INFO][6174] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:57:10.165260 containerd[1804]: time="2025-07-06T23:57:10.163910594Z" level=info msg="TearDown network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\" successfully" Jul 6 23:57:10.165260 containerd[1804]: time="2025-07-06T23:57:10.164232996Z" level=info msg="StopPodSandbox for \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\" returns successfully" Jul 6 23:57:10.166566 containerd[1804]: time="2025-07-06T23:57:10.166234906Z" level=info msg="RemovePodSandbox for \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\"" Jul 6 23:57:10.166566 containerd[1804]: time="2025-07-06T23:57:10.166271506Z" level=info msg="Forcibly stopping sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\"" Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.254 [WARNING][6201] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0", GenerateName:"calico-apiserver-68646bbcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"716cfedd-0158-4dcb-9ac1-1fdba73e9c13", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68646bbcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fe0535f741", ContainerID:"b1477de07e7a0952405dc9b0ea9a84c8713dc44c8ed82d72869232d9b9fbe8b6", Pod:"calico-apiserver-68646bbcb-2gm8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd99629f6b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.254 [INFO][6201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.254 [INFO][6201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" iface="eth0" netns="" Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.254 [INFO][6201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.254 [INFO][6201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.321 [INFO][6208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" HandleID="k8s-pod-network.167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.325 [INFO][6208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.325 [INFO][6208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.334 [WARNING][6208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" HandleID="k8s-pod-network.167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.334 [INFO][6208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" HandleID="k8s-pod-network.167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Workload="ci--4081.3.4--a--fe0535f741-k8s-calico--apiserver--68646bbcb--2gm8s-eth0" Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.336 [INFO][6208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:10.343788 containerd[1804]: 2025-07-06 23:57:10.340 [INFO][6201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635" Jul 6 23:57:10.346425 containerd[1804]: time="2025-07-06T23:57:10.343859725Z" level=info msg="TearDown network for sandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\" successfully" Jul 6 23:57:10.354762 containerd[1804]: time="2025-07-06T23:57:10.354719281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:10.355843 containerd[1804]: time="2025-07-06T23:57:10.355379185Z" level=info msg="RemovePodSandbox \"167e5f6e7a533da1a1db07e91e2e6681322f68ace99eff9319a97c39546a3635\" returns successfully" Jul 6 23:57:10.586137 systemd[1]: run-containerd-runc-k8s.io-2428782712e2a18e16510e4be6f61acfb08296af5cec0847013476e36cf82cb6-runc.xFNt9q.mount: Deactivated successfully. 
Jul 6 23:57:12.021421 containerd[1804]: time="2025-07-06T23:57:12.018672497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:12.022070 containerd[1804]: time="2025-07-06T23:57:12.022023212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:57:12.025936 containerd[1804]: time="2025-07-06T23:57:12.025142627Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:12.034224 containerd[1804]: time="2025-07-06T23:57:12.034181968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:12.034923 containerd[1804]: time="2025-07-06T23:57:12.034880971Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.605083909s" Jul 6 23:57:12.035009 containerd[1804]: time="2025-07-06T23:57:12.034928971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 23:57:12.038463 containerd[1804]: time="2025-07-06T23:57:12.038434887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:57:12.080389 containerd[1804]: time="2025-07-06T23:57:12.079947377Z" level=info msg="CreateContainer within sandbox 
\"1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:57:12.115423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824767066.mount: Deactivated successfully. Jul 6 23:57:12.122144 containerd[1804]: time="2025-07-06T23:57:12.120685464Z" level=info msg="CreateContainer within sandbox \"1aa77ed30ba6257368f2fd769617287b1877be4af34d8b375f84620539196b66\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a92a283036f56edb194f0fd484128b84c54ba5d08bb54393f7352a3e2319b7be\"" Jul 6 23:57:12.124264 containerd[1804]: time="2025-07-06T23:57:12.123071775Z" level=info msg="StartContainer for \"a92a283036f56edb194f0fd484128b84c54ba5d08bb54393f7352a3e2319b7be\"" Jul 6 23:57:12.241027 containerd[1804]: time="2025-07-06T23:57:12.240972314Z" level=info msg="StartContainer for \"a92a283036f56edb194f0fd484128b84c54ba5d08bb54393f7352a3e2319b7be\" returns successfully" Jul 6 23:57:12.559380 kubelet[3292]: I0706 23:57:12.559311 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-xhs8b" podStartSLOduration=33.945234512 podStartE2EDuration="50.559285571s" podCreationTimestamp="2025-07-06 23:56:22 +0000 UTC" firstStartedPulling="2025-07-06 23:56:50.814614198 +0000 UTC m=+47.570945939" lastFinishedPulling="2025-07-06 23:57:07.428665257 +0000 UTC m=+64.184996998" observedRunningTime="2025-07-06 23:57:08.520840474 +0000 UTC m=+65.277172315" watchObservedRunningTime="2025-07-06 23:57:12.559285571 +0000 UTC m=+69.315617312" Jul 6 23:57:12.614072 kubelet[3292]: I0706 23:57:12.613997 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79f7f6c588-x5rcf" podStartSLOduration=29.395324556 podStartE2EDuration="50.613974021s" podCreationTimestamp="2025-07-06 23:56:22 +0000 UTC" firstStartedPulling="2025-07-06 23:56:50.819488321 +0000 UTC 
m=+47.575820062" lastFinishedPulling="2025-07-06 23:57:12.038137786 +0000 UTC m=+68.794469527" observedRunningTime="2025-07-06 23:57:12.562274585 +0000 UTC m=+69.318606426" watchObservedRunningTime="2025-07-06 23:57:12.613974021 +0000 UTC m=+69.370305762" Jul 6 23:57:13.507148 containerd[1804]: time="2025-07-06T23:57:13.505355101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:13.509153 containerd[1804]: time="2025-07-06T23:57:13.509076718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:57:13.513773 containerd[1804]: time="2025-07-06T23:57:13.513726939Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:13.521295 containerd[1804]: time="2025-07-06T23:57:13.521261874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:13.524286 containerd[1804]: time="2025-07-06T23:57:13.524250287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.4857784s" Jul 6 23:57:13.524406 containerd[1804]: time="2025-07-06T23:57:13.524292387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:57:13.531263 containerd[1804]: time="2025-07-06T23:57:13.531233719Z" level=info msg="CreateContainer within sandbox 
\"399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:57:13.575550 containerd[1804]: time="2025-07-06T23:57:13.575400721Z" level=info msg="CreateContainer within sandbox \"399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3b8d04f0ca3503338c4e7cdbc909823e396ebef2cdaefdd447106f4a3d432070\"" Jul 6 23:57:13.579006 containerd[1804]: time="2025-07-06T23:57:13.577074529Z" level=info msg="StartContainer for \"3b8d04f0ca3503338c4e7cdbc909823e396ebef2cdaefdd447106f4a3d432070\"" Jul 6 23:57:13.698459 containerd[1804]: time="2025-07-06T23:57:13.698401984Z" level=info msg="StartContainer for \"3b8d04f0ca3503338c4e7cdbc909823e396ebef2cdaefdd447106f4a3d432070\" returns successfully" Jul 6 23:57:13.702264 containerd[1804]: time="2025-07-06T23:57:13.702216102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:57:18.024514 containerd[1804]: time="2025-07-06T23:57:18.023151377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:18.026077 containerd[1804]: time="2025-07-06T23:57:18.026026590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:57:18.030816 containerd[1804]: time="2025-07-06T23:57:18.030781911Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:18.036683 containerd[1804]: time="2025-07-06T23:57:18.036639638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.334169635s" Jul 6 23:57:18.036789 containerd[1804]: time="2025-07-06T23:57:18.036688738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:57:18.038308 containerd[1804]: time="2025-07-06T23:57:18.037423242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:18.040927 containerd[1804]: time="2025-07-06T23:57:18.040895158Z" level=info msg="CreateContainer within sandbox \"399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:57:18.088534 containerd[1804]: time="2025-07-06T23:57:18.088388075Z" level=info msg="CreateContainer within sandbox \"399f24755280cba58926282e354fc7494c6f0d5bde728bd2c22f4402b9a6bd1b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8b2ceb834d7a4f08e1998b2af9e338fb0333a2f601a3a261e0659faa244f5684\"" Jul 6 23:57:18.089505 containerd[1804]: time="2025-07-06T23:57:18.089256079Z" level=info msg="StartContainer for \"8b2ceb834d7a4f08e1998b2af9e338fb0333a2f601a3a261e0659faa244f5684\"" Jul 6 23:57:18.151447 systemd[1]: run-containerd-runc-k8s.io-8b2ceb834d7a4f08e1998b2af9e338fb0333a2f601a3a261e0659faa244f5684-runc.HZ7kG3.mount: Deactivated successfully. 
Jul 6 23:57:18.204945 containerd[1804]: time="2025-07-06T23:57:18.204408906Z" level=info msg="StartContainer for \"8b2ceb834d7a4f08e1998b2af9e338fb0333a2f601a3a261e0659faa244f5684\" returns successfully" Jul 6 23:57:18.582501 kubelet[3292]: I0706 23:57:18.581967 3292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q2vkj" podStartSLOduration=29.934676119 podStartE2EDuration="56.581946034s" podCreationTimestamp="2025-07-06 23:56:22 +0000 UTC" firstStartedPulling="2025-07-06 23:56:51.391144131 +0000 UTC m=+48.147475972" lastFinishedPulling="2025-07-06 23:57:18.038414046 +0000 UTC m=+74.794745887" observedRunningTime="2025-07-06 23:57:18.581543532 +0000 UTC m=+75.337875373" watchObservedRunningTime="2025-07-06 23:57:18.581946034 +0000 UTC m=+75.338277875" Jul 6 23:57:19.037677 kubelet[3292]: I0706 23:57:19.037563 3292 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:57:19.037677 kubelet[3292]: I0706 23:57:19.037606 3292 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:57:19.339626 kubelet[3292]: I0706 23:57:19.338675 3292 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:58:07.236859 systemd[1]: run-containerd-runc-k8s.io-2428782712e2a18e16510e4be6f61acfb08296af5cec0847013476e36cf82cb6-runc.khEiUg.mount: Deactivated successfully. Jul 6 23:58:23.575448 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:48950.service - OpenSSH per-connection server daemon (10.200.16.10:48950). 
Jul 6 23:58:24.199310 sshd[6601]: Accepted publickey for core from 10.200.16.10 port 48950 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:24.200994 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:24.206445 systemd-logind[1768]: New session 10 of user core. Jul 6 23:58:24.210448 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:58:24.707001 sshd[6601]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:24.711325 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:48950.service: Deactivated successfully. Jul 6 23:58:24.717425 systemd-logind[1768]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:58:24.718293 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:58:24.720269 systemd-logind[1768]: Removed session 10. Jul 6 23:58:29.828580 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:37600.service - OpenSSH per-connection server daemon (10.200.16.10:37600). Jul 6 23:58:30.480443 sshd[6635]: Accepted publickey for core from 10.200.16.10 port 37600 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:30.482070 sshd[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:30.487049 systemd-logind[1768]: New session 11 of user core. Jul 6 23:58:30.490384 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:58:31.000371 sshd[6635]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:31.007285 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:37600.service: Deactivated successfully. Jul 6 23:58:31.020006 systemd-logind[1768]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:58:31.024853 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:58:31.033930 systemd-logind[1768]: Removed session 11. 
Jul 6 23:58:36.105760 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:37616.service - OpenSSH per-connection server daemon (10.200.16.10:37616). Jul 6 23:58:36.728949 sshd[6650]: Accepted publickey for core from 10.200.16.10 port 37616 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:36.730586 sshd[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:36.735596 systemd-logind[1768]: New session 12 of user core. Jul 6 23:58:36.744401 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:58:37.233580 sshd[6650]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:37.245674 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:37616.service: Deactivated successfully. Jul 6 23:58:37.253056 systemd-logind[1768]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:58:37.254809 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:58:37.258343 systemd-logind[1768]: Removed session 12. Jul 6 23:58:37.337470 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:37622.service - OpenSSH per-connection server daemon (10.200.16.10:37622). Jul 6 23:58:37.962576 sshd[6704]: Accepted publickey for core from 10.200.16.10 port 37622 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:37.964204 sshd[6704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:37.969073 systemd-logind[1768]: New session 13 of user core. Jul 6 23:58:37.974549 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:58:38.499362 sshd[6704]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:38.504645 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:37622.service: Deactivated successfully. Jul 6 23:58:38.509140 systemd-logind[1768]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:58:38.509517 systemd[1]: session-13.scope: Deactivated successfully. 
Jul 6 23:58:38.511958 systemd-logind[1768]: Removed session 13. Jul 6 23:58:38.607476 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:37630.service - OpenSSH per-connection server daemon (10.200.16.10:37630). Jul 6 23:58:39.233574 sshd[6716]: Accepted publickey for core from 10.200.16.10 port 37630 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:58:39.235501 sshd[6716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:39.240355 systemd-logind[1768]: New session 14 of user core. Jul 6 23:58:39.246441 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:58:39.742718 sshd[6716]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:39.746329 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:37630.service: Deactivated successfully. Jul 6 23:58:39.751657 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:58:39.751945 systemd-logind[1768]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:58:39.754162 systemd-logind[1768]: Removed session 14. Jul 6 23:58:42.436755 update_engine[1775]: I20250706 23:58:42.436683 1775 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 6 23:58:42.436755 update_engine[1775]: I20250706 23:58:42.436745 1775 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 6 23:58:42.437528 update_engine[1775]: I20250706 23:58:42.436991 1775 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 6 23:58:42.437734 update_engine[1775]: I20250706 23:58:42.437650 1775 omaha_request_params.cc:62] Current group set to lts Jul 6 23:58:42.437839 update_engine[1775]: I20250706 23:58:42.437812 1775 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 6 23:58:42.438145 update_engine[1775]: I20250706 23:58:42.437906 1775 update_attempter.cc:643] Scheduling an action processor start. 
Jul 6 23:58:42.438145 update_engine[1775]: I20250706 23:58:42.437934 1775 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 6 23:58:42.438145 update_engine[1775]: I20250706 23:58:42.437978 1775 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 6 23:58:42.438145 update_engine[1775]: I20250706 23:58:42.438068 1775 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 6 23:58:42.438145 update_engine[1775]: I20250706 23:58:42.438079 1775 omaha_request_action.cc:272] Request: Jul 6 23:58:42.438145 update_engine[1775]: Jul 6 23:58:42.438145 update_engine[1775]: Jul 6 23:58:42.438145 update_engine[1775]: Jul 6 23:58:42.438145 update_engine[1775]: Jul 6 23:58:42.438145 update_engine[1775]: Jul 6 23:58:42.438145 update_engine[1775]: Jul 6 23:58:42.438145 update_engine[1775]: Jul 6 23:58:42.438145 update_engine[1775]: Jul 6 23:58:42.438145 update_engine[1775]: I20250706 23:58:42.438091 1775 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:58:42.438790 locksmithd[1824]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 6 23:58:42.440058 update_engine[1775]: I20250706 23:58:42.440019 1775 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:58:42.440500 update_engine[1775]: I20250706 23:58:42.440464 1775 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:58:42.462039 update_engine[1775]: E20250706 23:58:42.461972 1775 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:58:42.462210 update_engine[1775]: I20250706 23:58:42.462090 1775 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 6 23:58:44.855758 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:47172.service - OpenSSH per-connection server daemon (10.200.16.10:47172). 
Jul 6 23:58:45.479180 sshd[6735]: Accepted publickey for core from 10.200.16.10 port 47172 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:45.480781 sshd[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:45.485828 systemd-logind[1768]: New session 15 of user core.
Jul 6 23:58:45.489094 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:58:45.986182 sshd[6735]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:45.990312 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:47172.service: Deactivated successfully.
Jul 6 23:58:45.995781 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:58:45.997493 systemd-logind[1768]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:58:45.998439 systemd-logind[1768]: Removed session 15.
Jul 6 23:58:50.875897 systemd[1]: run-containerd-runc-k8s.io-c3e500f2efeaca2826accf41cffd1e492402943f3183057d247203250357201b-runc.WqAH0S.mount: Deactivated successfully.
Jul 6 23:58:51.095454 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:37036.service - OpenSSH per-connection server daemon (10.200.16.10:37036).
Jul 6 23:58:51.719206 sshd[6790]: Accepted publickey for core from 10.200.16.10 port 37036 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:51.721009 sshd[6790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:51.726548 systemd-logind[1768]: New session 16 of user core.
Jul 6 23:58:51.732434 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:58:51.833500 systemd[1]: run-containerd-runc-k8s.io-2428782712e2a18e16510e4be6f61acfb08296af5cec0847013476e36cf82cb6-runc.IwoqjR.mount: Deactivated successfully.
Jul 6 23:58:52.221601 sshd[6790]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:52.225552 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:37036.service: Deactivated successfully.
Jul 6 23:58:52.232197 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:58:52.233253 systemd-logind[1768]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:58:52.234205 systemd-logind[1768]: Removed session 16.
Jul 6 23:58:52.435270 update_engine[1775]: I20250706 23:58:52.435175  1775 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:58:52.435884 update_engine[1775]: I20250706 23:58:52.435570  1775 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:58:52.435944 update_engine[1775]: I20250706 23:58:52.435914  1775 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:58:52.442259 update_engine[1775]: E20250706 23:58:52.442203  1775 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:58:52.442398 update_engine[1775]: I20250706 23:58:52.442304  1775 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 6 23:58:57.330623 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:37038.service - OpenSSH per-connection server daemon (10.200.16.10:37038).
Jul 6 23:58:57.959877 sshd[6824]: Accepted publickey for core from 10.200.16.10 port 37038 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:57.962920 sshd[6824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:57.971220 systemd-logind[1768]: New session 17 of user core.
Jul 6 23:58:57.976443 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:58:58.523999 sshd[6824]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:58.529326 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:37038.service: Deactivated successfully.
Jul 6 23:58:58.536098 systemd-logind[1768]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:58:58.537015 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:58:58.538987 systemd-logind[1768]: Removed session 17.
Jul 6 23:58:58.634879 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:37046.service - OpenSSH per-connection server daemon (10.200.16.10:37046).
Jul 6 23:58:59.281236 sshd[6838]: Accepted publickey for core from 10.200.16.10 port 37046 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:59.283097 sshd[6838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:59.296655 systemd-logind[1768]: New session 18 of user core.
Jul 6 23:58:59.301456 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:58:59.990296 sshd[6838]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:59.995159 systemd-logind[1768]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:59:00.000210 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:37046.service: Deactivated successfully.
Jul 6 23:59:00.005866 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:59:00.007847 systemd-logind[1768]: Removed session 18.
Jul 6 23:59:00.097434 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:52430.service - OpenSSH per-connection server daemon (10.200.16.10:52430).
Jul 6 23:59:00.726526 sshd[6850]: Accepted publickey for core from 10.200.16.10 port 52430 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:00.728517 sshd[6850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:00.732951 systemd-logind[1768]: New session 19 of user core.
Jul 6 23:59:00.736404 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:59:02.438533 update_engine[1775]: I20250706 23:59:02.436620  1775 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:59:02.438533 update_engine[1775]: I20250706 23:59:02.436933  1775 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:59:02.439442 update_engine[1775]: I20250706 23:59:02.439383  1775 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:59:02.474773 update_engine[1775]: E20250706 23:59:02.474622  1775 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:59:02.474773 update_engine[1775]: I20250706 23:59:02.474726  1775 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 6 23:59:03.293548 sshd[6850]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:03.297242 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:52430.service: Deactivated successfully.
Jul 6 23:59:03.303643 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:59:03.304742 systemd-logind[1768]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:59:03.305847 systemd-logind[1768]: Removed session 19.
Jul 6 23:59:03.401488 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:52432.service - OpenSSH per-connection server daemon (10.200.16.10:52432).
Jul 6 23:59:04.024684 sshd[6869]: Accepted publickey for core from 10.200.16.10 port 52432 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:04.026368 sshd[6869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:04.030487 systemd-logind[1768]: New session 20 of user core.
Jul 6 23:59:04.034383 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:59:04.630089 sshd[6869]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:04.635061 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:52432.service: Deactivated successfully.
Jul 6 23:59:04.640937 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:59:04.641818 systemd-logind[1768]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:59:04.642915 systemd-logind[1768]: Removed session 20.
Jul 6 23:59:04.735885 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:52448.service - OpenSSH per-connection server daemon (10.200.16.10:52448).
Jul 6 23:59:05.360135 sshd[6883]: Accepted publickey for core from 10.200.16.10 port 52448 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:05.361710 sshd[6883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:05.366868 systemd-logind[1768]: New session 21 of user core.
Jul 6 23:59:05.374376 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:59:05.863038 sshd[6883]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:05.868242 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:52448.service: Deactivated successfully.
Jul 6 23:59:05.872440 systemd-logind[1768]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:59:05.872790 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:59:05.874475 systemd-logind[1768]: Removed session 21.
Jul 6 23:59:10.972421 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:55174.service - OpenSSH per-connection server daemon (10.200.16.10:55174).
Jul 6 23:59:11.601845 sshd[6939]: Accepted publickey for core from 10.200.16.10 port 55174 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:11.603473 sshd[6939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:11.608469 systemd-logind[1768]: New session 22 of user core.
Jul 6 23:59:11.615702 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:59:12.099109 sshd[6939]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:12.103016 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:55174.service: Deactivated successfully.
Jul 6 23:59:12.109955 systemd-logind[1768]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:59:12.111486 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:59:12.112861 systemd-logind[1768]: Removed session 22.
Jul 6 23:59:12.435439 update_engine[1775]: I20250706 23:59:12.434791  1775 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:59:12.435439 update_engine[1775]: I20250706 23:59:12.435212  1775 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:59:12.436005 update_engine[1775]: I20250706 23:59:12.435563  1775 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:59:12.579382 update_engine[1775]: E20250706 23:59:12.579300  1775 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:59:12.579674 update_engine[1775]: I20250706 23:59:12.579411  1775 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 6 23:59:12.579674 update_engine[1775]: I20250706 23:59:12.579432  1775 omaha_request_action.cc:617] Omaha request response:
Jul 6 23:59:12.579674 update_engine[1775]: E20250706 23:59:12.579548  1775 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 6 23:59:12.579674 update_engine[1775]: I20250706 23:59:12.579584  1775 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 6 23:59:12.579674 update_engine[1775]: I20250706 23:59:12.579593  1775 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 6 23:59:12.579674 update_engine[1775]: I20250706 23:59:12.579602  1775 update_attempter.cc:306] Processing Done.
Jul 6 23:59:12.579674 update_engine[1775]: E20250706 23:59:12.579622  1775 update_attempter.cc:619] Update failed.
Jul 6 23:59:12.579674 update_engine[1775]: I20250706 23:59:12.579636  1775 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 6 23:59:12.579674 update_engine[1775]: I20250706 23:59:12.579646  1775 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 6 23:59:12.579674 update_engine[1775]: I20250706 23:59:12.579656  1775 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 6 23:59:12.580152 update_engine[1775]: I20250706 23:59:12.579766  1775 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 6 23:59:12.580152 update_engine[1775]: I20250706 23:59:12.579802  1775 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 6 23:59:12.580152 update_engine[1775]: I20250706 23:59:12.579814  1775 omaha_request_action.cc:272] Request:
Jul 6 23:59:12.580152 update_engine[1775]:
Jul 6 23:59:12.580152 update_engine[1775]:
Jul 6 23:59:12.580152 update_engine[1775]:
Jul 6 23:59:12.580152 update_engine[1775]:
Jul 6 23:59:12.580152 update_engine[1775]:
Jul 6 23:59:12.580152 update_engine[1775]:
Jul 6 23:59:12.580152 update_engine[1775]: I20250706 23:59:12.579825  1775 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:59:12.580152 update_engine[1775]: I20250706 23:59:12.580089  1775 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:59:12.580569 update_engine[1775]: I20250706 23:59:12.580435  1775 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:59:12.580900 locksmithd[1824]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 6 23:59:12.601033 update_engine[1775]: E20250706 23:59:12.600975  1775 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:59:12.601194 update_engine[1775]: I20250706 23:59:12.601052  1775 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 6 23:59:12.601194 update_engine[1775]: I20250706 23:59:12.601063  1775 omaha_request_action.cc:617] Omaha request response:
Jul 6 23:59:12.601194 update_engine[1775]: I20250706 23:59:12.601074  1775 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 6 23:59:12.601194 update_engine[1775]: I20250706 23:59:12.601081  1775 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 6 23:59:12.601194 update_engine[1775]: I20250706 23:59:12.601089  1775 update_attempter.cc:306] Processing Done.
Jul 6 23:59:12.601194 update_engine[1775]: I20250706 23:59:12.601096  1775 update_attempter.cc:310] Error event sent.
Jul 6 23:59:12.601194 update_engine[1775]: I20250706 23:59:12.601110  1775 update_check_scheduler.cc:74] Next update check in 42m39s
Jul 6 23:59:12.601605 locksmithd[1824]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 6 23:59:17.210534 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:55178.service - OpenSSH per-connection server daemon (10.200.16.10:55178).
Jul 6 23:59:17.851144 sshd[6957]: Accepted publickey for core from 10.200.16.10 port 55178 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:17.850901 sshd[6957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:17.857499 systemd-logind[1768]: New session 23 of user core.
Jul 6 23:59:17.862502 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:59:18.545586 sshd[6957]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:18.550861 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:55178.service: Deactivated successfully.
Jul 6 23:59:18.558101 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:59:18.560073 systemd-logind[1768]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:59:18.562994 systemd-logind[1768]: Removed session 23.
Jul 6 23:59:20.888054 systemd[1]: run-containerd-runc-k8s.io-c3e500f2efeaca2826accf41cffd1e492402943f3183057d247203250357201b-runc.ZpSDSG.mount: Deactivated successfully.
Jul 6 23:59:23.652482 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:60378.service - OpenSSH per-connection server daemon (10.200.16.10:60378).
Jul 6 23:59:24.280180 sshd[6992]: Accepted publickey for core from 10.200.16.10 port 60378 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:24.281784 sshd[6992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:24.286211 systemd-logind[1768]: New session 24 of user core.
Jul 6 23:59:24.292396 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:59:24.777972 sshd[6992]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:24.781687 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:60378.service: Deactivated successfully.
Jul 6 23:59:24.788930 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:59:24.790628 systemd-logind[1768]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:59:24.793001 systemd-logind[1768]: Removed session 24.
Jul 6 23:59:29.890297 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:40276.service - OpenSSH per-connection server daemon (10.200.16.10:40276).
Jul 6 23:59:30.527478 sshd[7012]: Accepted publickey for core from 10.200.16.10 port 40276 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:30.529383 sshd[7012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:30.537774 systemd-logind[1768]: New session 25 of user core.
Jul 6 23:59:30.543591 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:59:31.064419 sshd[7012]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:31.070602 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:40276.service: Deactivated successfully.
Jul 6 23:59:31.070765 systemd-logind[1768]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:59:31.078843 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:59:31.082541 systemd-logind[1768]: Removed session 25.
Jul 6 23:59:36.171968 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:40290.service - OpenSSH per-connection server daemon (10.200.16.10:40290).
Jul 6 23:59:36.794363 sshd[7028]: Accepted publickey for core from 10.200.16.10 port 40290 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:36.795935 sshd[7028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:36.800226 systemd-logind[1768]: New session 26 of user core.
Jul 6 23:59:36.805676 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:59:37.293929 sshd[7028]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:37.299563 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:40290.service: Deactivated successfully.
Jul 6 23:59:37.309053 systemd-logind[1768]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:59:37.310452 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:59:37.311986 systemd-logind[1768]: Removed session 26.
Jul 6 23:59:42.405483 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.16.10:35550.service - OpenSSH per-connection server daemon (10.200.16.10:35550).
Jul 6 23:59:43.027809 sshd[7083]: Accepted publickey for core from 10.200.16.10 port 35550 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:43.029432 sshd[7083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:43.034403 systemd-logind[1768]: New session 27 of user core.
Jul 6 23:59:43.039375 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 6 23:59:43.531169 sshd[7083]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:43.536474 systemd[1]: sshd@24-10.200.8.39:22-10.200.16.10:35550.service: Deactivated successfully.
Jul 6 23:59:43.540883 systemd[1]: session-27.scope: Deactivated successfully.
Jul 6 23:59:43.541841 systemd-logind[1768]: Session 27 logged out. Waiting for processes to exit.
Jul 6 23:59:43.542897 systemd-logind[1768]: Removed session 27.