Jul 6 23:54:59.101195 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:54:59.101242 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:59.101257 kernel: BIOS-provided physical RAM map:
Jul 6 23:54:59.101267 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:54:59.101277 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 6 23:54:59.101287 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul 6 23:54:59.101299 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jul 6 23:54:59.101313 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jul 6 23:54:59.101324 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 6 23:54:59.101334 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 6 23:54:59.101344 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 6 23:54:59.101354 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 6 23:54:59.101364 kernel: printk: bootconsole [earlyser0] enabled
Jul 6 23:54:59.101375 kernel: NX (Execute Disable) protection: active
Jul 6 23:54:59.101390 kernel: APIC: Static calls initialized
Jul 6 23:54:59.101402 kernel: efi: EFI v2.7 by Microsoft
Jul 6 23:54:59.101414 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98
Jul 6 23:54:59.101425 kernel: SMBIOS 3.1.0 present.
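The command line above is the contract between the bootloader and everything that parses it later: the kernel echoes it again at "Kernel command line:", and dracut-cmdline reparses it further down. A minimal sketch of splitting such a line into flags and key=value pairs; the helper name is hypothetical, and real parsers also handle quoting and repeated keys such as the two console= entries here:

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into {key: value} pairs.

        Bare tokens (e.g. flatcar.autologin) become True; for repeated
        keys (e.g. console=) the last occurrence wins in this sketch.
        """
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    with open("/proc/cmdline") as f:  # same content the kernel logged above
        params = parse_cmdline(f.read())
    print(params.get("root"))            # LABEL=ROOT
    print(params.get("verity.usrhash"))  # dm-verity root hash for /usr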
Jul 6 23:54:59.101437 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul 6 23:54:59.101448 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 6 23:54:59.101459 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul 6 23:54:59.101471 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Jul 6 23:54:59.101482 kernel: Hyper-V: Nested features: 0x1e0101
Jul 6 23:54:59.101494 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 6 23:54:59.101507 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 6 23:54:59.101519 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 6 23:54:59.101531 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 6 23:54:59.101544 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul 6 23:54:59.101555 kernel: tsc: Detected 2593.906 MHz processor
Jul 6 23:54:59.101567 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:54:59.101579 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:54:59.101591 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul 6 23:54:59.101603 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 6 23:54:59.101617 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:54:59.101629 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul 6 23:54:59.101641 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul 6 23:54:59.101652 kernel: Using GB pages for direct mapping
Jul 6 23:54:59.101664 kernel: Secure boot disabled
Jul 6 23:54:59.101675 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:54:59.101687 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 6 23:54:59.101704 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:59.101719 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:59.101731 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 6 23:54:59.101744 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 6 23:54:59.101756 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:59.101769 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:59.101781 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:59.101796 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:59.101808 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:59.101821 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:59.101833 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:54:59.101846 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 6 23:54:59.101858 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul 6 23:54:59.101870 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 6 23:54:59.101883 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 6 23:54:59.101898 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 6 23:54:59.101910 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 6 23:54:59.101923 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul 6 23:54:59.101935 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul 6 23:54:59.101948 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 6 23:54:59.101960 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul 6 23:54:59.101973 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:54:59.101985 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:54:59.101997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 6 23:54:59.102012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul 6 23:54:59.102025 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul 6 23:54:59.102037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 6 23:54:59.102050 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 6 23:54:59.102062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 6 23:54:59.102075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 6 23:54:59.102088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 6 23:54:59.102100 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 6 23:54:59.102113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 6 23:54:59.102127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 6 23:54:59.102140 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 6 23:54:59.102152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul 6 23:54:59.102165 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul 6 23:54:59.102177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul 6 23:54:59.102190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul 6 23:54:59.102202 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul 6 23:54:59.102215 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul 6 23:54:59.103188 kernel: Zone ranges:
Jul 6 23:54:59.103208 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:54:59.103249 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 6 23:54:59.103263 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 6 23:54:59.103276 kernel: Movable zone start for each node
Jul 6 23:54:59.103290 kernel: Early memory node ranges
Jul 6 23:54:59.103304 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 6 23:54:59.103318 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul 6 23:54:59.103331 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 6 23:54:59.103345 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 6 23:54:59.103361 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 6 23:54:59.103375 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:54:59.103389 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 6 23:54:59.103402 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul 6 23:54:59.103416 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 6 23:54:59.103429 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul 6 23:54:59.103443 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:54:59.103456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:54:59.103470 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:54:59.103486 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 6 23:54:59.103500 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:54:59.103513 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 6 23:54:59.103527 kernel: Booting paravirtualized kernel on Hyper-V
Jul 6 23:54:59.103541 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:54:59.103555 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:54:59.103568 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:54:59.103582 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:54:59.103595 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:54:59.103612 kernel: Hyper-V: PV spinlocks enabled
Jul 6 23:54:59.103625 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:54:59.103640 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:59.103655 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:54:59.103668 kernel: random: crng init done
Jul 6 23:54:59.103681 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 6 23:54:59.103695 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:54:59.103708 kernel: Fallback order for Node 0: 0
Jul 6 23:54:59.103725 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jul 6 23:54:59.103748 kernel: Policy zone: Normal
Jul 6 23:54:59.103766 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:54:59.103780 kernel: software IO TLB: area num 2.
Jul 6 23:54:59.103795 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 310124K reserved, 0K cma-reserved)
Jul 6 23:54:59.103809 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:54:59.103824 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:54:59.103838 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:54:59.103853 kernel: Dynamic Preempt: voluntary
Jul 6 23:54:59.103867 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:54:59.103883 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:54:59.103900 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:54:59.103915 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:54:59.103929 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:54:59.103944 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:54:59.103959 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:54:59.103975 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:54:59.103990 kernel: Using NULL legacy PIC
Jul 6 23:54:59.104004 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 6 23:54:59.104019 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:54:59.104033 kernel: Console: colour dummy device 80x25
Jul 6 23:54:59.104048 kernel: printk: console [tty1] enabled
Jul 6 23:54:59.104062 kernel: printk: console [ttyS0] enabled
Jul 6 23:54:59.104077 kernel: printk: bootconsole [earlyser0] disabled
Jul 6 23:54:59.104091 kernel: ACPI: Core revision 20230628
Jul 6 23:54:59.104105 kernel: Failed to register legacy timer interrupt
Jul 6 23:54:59.104122 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:54:59.104136 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 6 23:54:59.104151 kernel: Hyper-V: Using IPI hypercalls
Jul 6 23:54:59.104165 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 6 23:54:59.104179 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 6 23:54:59.104192 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 6 23:54:59.104206 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 6 23:54:59.104239 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 6 23:54:59.104266 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 6 23:54:59.104294 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jul 6 23:54:59.104307 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 6 23:54:59.104320 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 6 23:54:59.104334 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:54:59.104348 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:54:59.104362 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:54:59.104373 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 6 23:54:59.104386 kernel: RETBleed: Vulnerable
Jul 6 23:54:59.104401 kernel: Speculative Store Bypass: Vulnerable
Jul 6 23:54:59.104418 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:54:59.104431 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:54:59.104442 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:54:59.104455 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:54:59.104469 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:54:59.104481 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:54:59.104499 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 6 23:54:59.104516 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 6 23:54:59.104529 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 6 23:54:59.104540 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:54:59.104554 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 6 23:54:59.104571 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 6 23:54:59.104583 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 6 23:54:59.104595 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul 6 23:54:59.104609 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:54:59.104622 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:54:59.104635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:54:59.104650 kernel: landlock: Up and running.
Jul 6 23:54:59.104662 kernel: SELinux: Initializing.
Jul 6 23:54:59.104676 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:54:59.104691 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:54:59.104706 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 6 23:54:59.104721 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:59.104739 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:59.104754 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:54:59.104769 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 6 23:54:59.104784 kernel: signal: max sigframe size: 3632
Jul 6 23:54:59.104798 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:54:59.104814 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:54:59.104828 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:54:59.104843 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:54:59.104858 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:54:59.104875 kernel: .... node #0, CPUs: #1
Jul 6 23:54:59.104890 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul 6 23:54:59.104907 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 6 23:54:59.104922 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:54:59.104936 kernel: smpboot: Max logical packages: 1
Jul 6 23:54:59.104950 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jul 6 23:54:59.104965 kernel: devtmpfs: initialized
Jul 6 23:54:59.104980 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:54:59.104998 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 6 23:54:59.105014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:54:59.105027 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:54:59.105040 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:54:59.105055 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:54:59.105069 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:54:59.105083 kernel: audit: type=2000 audit(1751846098.030:1): state=initialized audit_enabled=0 res=1
Jul 6 23:54:59.105097 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:54:59.105111 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:54:59.105129 kernel: cpuidle: using governor menu
Jul 6 23:54:59.105143 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:54:59.105157 kernel: dca service started, version 1.12.1
Jul 6 23:54:59.105171 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jul 6 23:54:59.105186 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:54:59.105200 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:54:59.105213 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:54:59.105238 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:54:59.105253 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:54:59.105269 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:54:59.105284 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:54:59.105298 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:54:59.105312 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:54:59.105326 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:54:59.105339 kernel: ACPI: Interpreter enabled
Jul 6 23:54:59.105354 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:54:59.105368 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:54:59.105381 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:54:59.105398 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 6 23:54:59.105412 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 6 23:54:59.105426 kernel: iommu: Default domain type: Translated
Jul 6 23:54:59.105440 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:54:59.105453 kernel: efivars: Registered efivars operations
Jul 6 23:54:59.105467 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:54:59.105481 kernel: PCI: System does not support PCI
Jul 6 23:54:59.105494 kernel: vgaarb: loaded
Jul 6 23:54:59.105508 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 6 23:54:59.105525 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:54:59.105538 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:54:59.105552 kernel: pnp: PnP ACPI init
Jul 6 23:54:59.105566 kernel: pnp: PnP ACPI: found 3 devices
Jul 6 23:54:59.105580 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:54:59.105594 kernel: NET: Registered PF_INET protocol family
Jul 6 23:54:59.105608 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:54:59.105622 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 6 23:54:59.105636 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:54:59.105653 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:54:59.105669 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 6 23:54:59.105683 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 6 23:54:59.105697 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:54:59.105711 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:54:59.105724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:54:59.105741 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:54:59.105755 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:54:59.105769 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 6 23:54:59.105785 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Jul 6 23:54:59.105799 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:54:59.105813 kernel: Initialise system trusted keyrings
Jul 6 23:54:59.105826 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 6 23:54:59.105840 kernel: Key type asymmetric registered
Jul 6 23:54:59.105854 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:54:59.105867 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:54:59.105883 kernel: io scheduler mq-deadline registered
Jul 6 23:54:59.105897 kernel: io scheduler kyber registered
Jul 6 23:54:59.105913 kernel: io scheduler bfq registered
Jul 6 23:54:59.105926 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:54:59.105939 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:54:59.105953 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:54:59.105967 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 6 23:54:59.105980 kernel: i8042: PNP: No PS/2 controller found.
Jul 6 23:54:59.109033 kernel: rtc_cmos 00:02: registered as rtc0 Jul 6 23:54:59.109158 kernel: rtc_cmos 00:02: setting system clock to 2025-07-06T23:54:58 UTC (1751846098) Jul 6 23:54:59.109299 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jul 6 23:54:59.109321 kernel: intel_pstate: CPU model not supported Jul 6 23:54:59.109336 kernel: efifb: probing for efifb Jul 6 23:54:59.109352 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 6 23:54:59.109367 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 6 23:54:59.109382 kernel: efifb: scrolling: redraw Jul 6 23:54:59.109398 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 6 23:54:59.109412 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:54:59.109430 kernel: fb0: EFI VGA frame buffer device Jul 6 23:54:59.109442 kernel: pstore: Using crash dump compression: deflate Jul 6 23:54:59.109455 kernel: pstore: Registered efi_pstore as persistent store backend Jul 6 23:54:59.109468 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:54:59.109484 kernel: Segment Routing with IPv6 Jul 6 23:54:59.109498 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:54:59.109511 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:54:59.109527 kernel: Key type dns_resolver registered Jul 6 23:54:59.109541 kernel: IPI shorthand broadcast: enabled Jul 6 23:54:59.109562 kernel: sched_clock: Marking stable (891003600, 49823300)->(1156273200, -215446300) Jul 6 23:54:59.109582 kernel: registered taskstats version 1 Jul 6 23:54:59.109598 kernel: Loading compiled-in X.509 certificates Jul 6 23:54:59.109614 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:54:59.109629 kernel: Key type .fscrypt registered Jul 6 23:54:59.109645 kernel: Key type fscrypt-provisioning registered Jul 6 23:54:59.109661 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 6 23:54:59.109677 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:54:59.109690 kernel: ima: No architecture policies found Jul 6 23:54:59.109710 kernel: clk: Disabling unused clocks Jul 6 23:54:59.109725 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:54:59.109739 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:54:59.109752 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:54:59.109765 kernel: Run /init as init process Jul 6 23:54:59.109780 kernel: with arguments: Jul 6 23:54:59.109794 kernel: /init Jul 6 23:54:59.109806 kernel: with environment: Jul 6 23:54:59.109819 kernel: HOME=/ Jul 6 23:54:59.109834 kernel: TERM=linux Jul 6 23:54:59.109847 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:54:59.109859 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:54:59.109871 systemd[1]: Detected virtualization microsoft. Jul 6 23:54:59.109883 systemd[1]: Detected architecture x86-64. Jul 6 23:54:59.109893 systemd[1]: Running in initrd. Jul 6 23:54:59.109903 systemd[1]: No hostname configured, using default hostname. Jul 6 23:54:59.109911 systemd[1]: Hostname set to . Jul 6 23:54:59.109925 systemd[1]: Initializing machine ID from random generator. 
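The rtc_cmos line pins the wall clock: epoch 1751846098 is 2025-07-06T23:54:58 UTC, and the audit(1751846098.030:1) stamp earlier in the log is the same second plus 30 ms. The round trip is easy to confirm:

    from datetime import datetime, timezone

    # "setting system clock to 2025-07-06T23:54:58 UTC (1751846098)"
    print(datetime.fromtimestamp(1751846098, tz=timezone.utc).isoformat())
    # -> 2025-07-06T23:54:58+00:00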
Jul 6 23:54:59.109933 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:54:59.109944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:54:59.109954 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:54:59.109963 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:54:59.109973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:54:59.109983 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:54:59.109992 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:54:59.110007 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:54:59.110016 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:54:59.110024 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:54:59.110033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:54:59.110041 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:54:59.110050 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:54:59.110062 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:54:59.110073 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:54:59.110081 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:54:59.110090 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:54:59.110099 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:54:59.110110 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:54:59.110118 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:54:59.110127 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:54:59.110139 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:54:59.110149 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:54:59.110161 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:54:59.110170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:54:59.110180 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:54:59.110190 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:54:59.110198 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:54:59.110207 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:54:59.110250 systemd-journald[176]: Collecting audit messages is disabled.
Jul 6 23:54:59.110277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:54:59.110289 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:54:59.110298 systemd-journald[176]: Journal started
Jul 6 23:54:59.110322 systemd-journald[176]: Runtime Journal (/run/log/journal/6298148220d945d0b2ccb82d59aade98) is 8.0M, max 158.8M, 150.8M free.
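The \x2d runs in the device unit names above are systemd's unit-name escaping: "/" in a path maps to "-", so a literal "-" must be written as \x2d (hence dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device for /dev/disk/by-label/EFI-SYSTEM). A small decoder sketch for this convention, illustrative only; systemd's own implementation handles more cases:

    import re

    def device_unit_to_path(name: str) -> str:
        # Order matters: turn "-" back into "/" first, then decode the
        # \xNN escapes, which contain no "-" at that point.
        body = name.removesuffix(".device").replace("-", "/")
        body = re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), body)
        return "/" + body

    print(device_unit_to_path(r"dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device"))
    # -> /dev/disk/by-label/EFI-SYSTEM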
Jul 6 23:54:59.104532 systemd-modules-load[177]: Inserted module 'overlay'
Jul 6 23:54:59.123247 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:54:59.126806 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:54:59.133931 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:54:59.151445 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:54:59.165124 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:54:59.165158 kernel: Bridge firewalling registered
Jul 6 23:54:59.165055 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jul 6 23:54:59.167498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:54:59.177882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:54:59.184446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:54:59.187906 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:54:59.196752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:54:59.211384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:54:59.220375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:54:59.225593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:54:59.239174 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:54:59.242724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:54:59.252541 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:54:59.269169 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:54:59.283405 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:54:59.301518 systemd-resolved[203]: Positive Trust Anchors:
Jul 6 23:54:59.308911 dracut-cmdline[213]: dracut-dracut-053
Jul 6 23:54:59.308911 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:54:59.301535 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:54:59.301589 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:54:59.305365 systemd-resolved[203]: Defaulting to hostname 'linux'.
Jul 6 23:54:59.311351 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:54:59.314564 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:54:59.379239 kernel: SCSI subsystem initialized
Jul 6 23:54:59.389237 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:54:59.401244 kernel: iscsi: registered transport (tcp)
Jul 6 23:54:59.422283 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:54:59.422367 kernel: QLogic iSCSI HBA Driver
Jul 6 23:54:59.458680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:54:59.467388 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:54:59.496760 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:54:59.496862 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:54:59.500445 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:54:59.541246 kernel: raid6: avx512x4 gen() 18750 MB/s
Jul 6 23:54:59.560233 kernel: raid6: avx512x2 gen() 18676 MB/s
Jul 6 23:54:59.579229 kernel: raid6: avx512x1 gen() 18747 MB/s
Jul 6 23:54:59.598234 kernel: raid6: avx2x4 gen() 18544 MB/s
Jul 6 23:54:59.617228 kernel: raid6: avx2x2 gen() 18650 MB/s
Jul 6 23:54:59.637117 kernel: raid6: avx2x1 gen() 14185 MB/s
Jul 6 23:54:59.637159 kernel: raid6: using algorithm avx512x4 gen() 18750 MB/s
Jul 6 23:54:59.659071 kernel: raid6: .... xor() 8283 MB/s, rmw enabled
Jul 6 23:54:59.659103 kernel: raid6: using avx512x2 recovery algorithm
Jul 6 23:54:59.681239 kernel: xor: automatically using best checksumming function avx
Jul 6 23:54:59.827245 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:54:59.837245 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:54:59.846398 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:54:59.859584 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Jul 6 23:54:59.864038 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:54:59.874470 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:54:59.897394 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation
Jul 6 23:54:59.927552 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:54:59.935467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:54:59.978321 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:54:59.989386 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:55:00.021797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:55:00.028588 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:55:00.035444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:55:00.041848 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:55:00.051365 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:55:00.066340 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:55:00.079679 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:55:00.087745 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:55:00.087775 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:55:00.115238 kernel: hv_vmbus: Vmbus version:5.2
Jul 6 23:55:00.136797 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:55:00.140337 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:00.147366 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:55:00.160480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:55:00.170103 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 6 23:55:00.170134 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 6 23:55:00.160817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:00.180855 kernel: PTP clock support registered
Jul 6 23:55:00.177961 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:00.189720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:00.197306 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 6 23:55:00.200456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:55:00.200579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:00.221261 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 6 23:55:00.223370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:00.237245 kernel: hv_vmbus: registering driver hv_netvsc
Jul 6 23:55:00.240245 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:55:00.246990 kernel: hv_utils: Registering HyperV Utility Driver
Jul 6 23:55:00.247027 kernel: hv_vmbus: registering driver hv_utils
Jul 6 23:55:00.247042 kernel: hv_vmbus: registering driver hv_storvsc
Jul 6 23:55:00.253004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:00.259365 kernel: hv_utils: Heartbeat IC version 3.0
Jul 6 23:55:00.259397 kernel: hv_utils: Shutdown IC version 3.2
Jul 6 23:55:00.259416 kernel: scsi host0: storvsc_host_t
Jul 6 23:55:00.270441 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 6 23:55:00.270541 kernel: hv_utils: TimeSync IC version 4.0
Jul 6 23:55:00.270567 kernel: scsi host1: storvsc_host_t
Jul 6 23:55:01.594894 systemd-resolved[203]: Clock change detected. Flushing caches.
Jul 6 23:55:01.603401 kernel: hv_vmbus: registering driver hid_hyperv
Jul 6 23:55:01.603446 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 6 23:55:01.603490 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 6 23:55:01.604193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:55:01.621161 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 6 23:55:01.648519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:01.660873 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 6 23:55:01.661275 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:55:01.665132 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 6 23:55:01.667653 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 6 23:55:01.667807 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 6 23:55:01.667944 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 6 23:55:01.668095 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 6 23:55:01.670039 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 6 23:55:01.679706 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:55:01.679768 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 6 23:55:01.823232 kernel: hv_netvsc 7ced8d4a-4564-7ced-8d4a-45647ced8d4a eth0: VF slot 1 added
Jul 6 23:55:01.832144 kernel: hv_vmbus: registering driver hv_pci
Jul 6 23:55:01.838058 kernel: hv_pci 72ddc379-ba26-4c04-ac12-5393d8f0638b: PCI VMBus probing: Using version 0x10004
Jul 6 23:55:01.843037 kernel: hv_pci 72ddc379-ba26-4c04-ac12-5393d8f0638b: PCI host bridge to bus ba26:00
Jul 6 23:55:01.843216 kernel: pci_bus ba26:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jul 6 23:55:01.848894 kernel: pci_bus ba26:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 6 23:55:01.854163 kernel: pci ba26:00:02.0: [15b3:1016] type 00 class 0x020000
Jul 6 23:55:01.859087 kernel: pci ba26:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 6 23:55:01.863193 kernel: pci ba26:00:02.0: enabling Extended Tags
Jul 6 23:55:01.874292 kernel: pci ba26:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ba26:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jul 6 23:55:01.880657 kernel: pci_bus ba26:00: busn_res: [bus 00-ff] end is updated to 00
Jul 6 23:55:01.880999 kernel: pci ba26:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 6 23:55:02.054512 kernel: mlx5_core ba26:00:02.0: enabling device (0000 -> 0002)
Jul 6 23:55:02.060054 kernel: mlx5_core ba26:00:02.0: firmware version: 14.30.5000
Jul 6 23:55:02.170510 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 6 23:55:02.248046 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (458)
Jul 6 23:55:02.264357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:55:02.276253 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (449)
Jul 6 23:55:02.282544 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 6 23:55:02.291198 kernel: hv_netvsc 7ced8d4a-4564-7ced-8d4a-45647ced8d4a eth0: VF registering: eth1
Jul 6 23:55:02.294449 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 6 23:55:02.299500 kernel: mlx5_core ba26:00:02.0 eth1: joined to eth0
Jul 6 23:55:02.299736 kernel: mlx5_core ba26:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 6 23:55:02.306542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 6 23:55:02.322480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:55:02.326145 kernel: mlx5_core ba26:00:02.0 enP47654s1: renamed from eth1
Jul 6 23:55:03.346960 disk-uuid[604]: The operation has completed successfully.
Jul 6 23:55:03.350399 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:55:03.425736 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:55:03.425855 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:55:03.451180 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:55:03.457658 sh[693]: Success
Jul 6 23:55:03.488263 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:55:03.693565 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:55:03.708141 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:55:03.710854 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:55:03.747496 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:55:03.747570 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:03.751285 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:55:03.754345 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:55:03.757179 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:55:04.025384 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:55:04.031235 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:55:04.047209 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:55:04.054145 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:55:04.066043 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:04.066084 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:04.071169 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:04.104552 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:55:04.114226 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:55:04.120948 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:04.126189 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:55:04.139181 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:55:04.168389 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:55:04.178248 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:55:04.198092 systemd-networkd[877]: lo: Link UP
Jul 6 23:55:04.198102 systemd-networkd[877]: lo: Gained carrier
Jul 6 23:55:04.200392 systemd-networkd[877]: Enumeration completed
Jul 6 23:55:04.200678 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:55:04.203715 systemd[1]: Reached target network.target - Network.
Jul 6 23:55:04.205144 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:04.205149 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:55:04.266043 kernel: mlx5_core ba26:00:02.0 enP47654s1: Link up
Jul 6 23:55:04.295053 kernel: hv_netvsc 7ced8d4a-4564-7ced-8d4a-45647ced8d4a eth0: Data path switched to VF: enP47654s1
Jul 6 23:55:04.295933 systemd-networkd[877]: enP47654s1: Link UP
Jul 6 23:55:04.296082 systemd-networkd[877]: eth0: Link UP
Jul 6 23:55:04.296292 systemd-networkd[877]: eth0: Gained carrier
Jul 6 23:55:04.296305 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:04.308073 systemd-networkd[877]: enP47654s1: Gained carrier
Jul 6 23:55:04.347102 systemd-networkd[877]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 6 23:55:04.862896 ignition[828]: Ignition 2.19.0
Jul 6 23:55:04.862908 ignition[828]: Stage: fetch-offline
Jul 6 23:55:04.862950 ignition[828]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:04.867590 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:55:04.862961 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:04.863092 ignition[828]: parsed url from cmdline: ""
Jul 6 23:55:04.863097 ignition[828]: no config URL provided
Jul 6 23:55:04.863104 ignition[828]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:55:04.863114 ignition[828]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:55:04.863121 ignition[828]: failed to fetch config: resource requires networking
Jul 6 23:55:04.884332 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:55:04.864932 ignition[828]: Ignition finished successfully
Jul 6 23:55:04.903984 ignition[886]: Ignition 2.19.0
Jul 6 23:55:04.903995 ignition[886]: Stage: fetch
Jul 6 23:55:04.904218 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:04.904231 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:04.904323 ignition[886]: parsed url from cmdline: ""
Jul 6 23:55:04.904328 ignition[886]: no config URL provided
Jul 6 23:55:04.904335 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:55:04.904343 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:55:04.904363 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 6 23:55:04.999391 ignition[886]: GET result: OK
Jul 6 23:55:04.999500 ignition[886]: config has been read from IMDS userdata
Jul 6 23:55:04.999541 ignition[886]: parsing config with SHA512: 8afefc7ad14204c829da7a39b2b3286d5724a2cc1ba7bd5da298c7f4fd80dae8a059264e4ab169514585e4797ce43c69bf272fd6e5a637cbc3c11b3270a2602c
Jul 6 23:55:05.004384 unknown[886]: fetched base config from "system"
Jul 6 23:55:05.004397 unknown[886]: fetched base config from "system"
Jul 6 23:55:05.004938 ignition[886]: fetch: fetch complete
Jul 6 23:55:05.004406 unknown[886]: fetched user config from "azure"
Jul 6 23:55:05.004944 ignition[886]: fetch: fetch passed
Jul 6 23:55:05.006962 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:55:05.004992 ignition[886]: Ignition finished successfully
Jul 6 23:55:05.024255 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:55:05.041874 ignition[892]: Ignition 2.19.0
Jul 6 23:55:05.041885 ignition[892]: Stage: kargs
Jul 6 23:55:05.042115 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:05.042128 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:05.042993 ignition[892]: kargs: kargs passed
Jul 6 23:55:05.043051 ignition[892]: Ignition finished successfully
Jul 6 23:55:05.054650 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:55:05.064178 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:55:05.082610 ignition[898]: Ignition 2.19.0
Jul 6 23:55:05.082620 ignition[898]: Stage: disks
Jul 6 23:55:05.084607 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:55:05.082854 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:05.088331 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:55:05.082873 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:55:05.092224 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:55:05.083753 ignition[898]: disks: disks passed
Jul 6 23:55:05.095607 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:55:05.083795 ignition[898]: Ignition finished successfully
Jul 6 23:55:05.101431 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:55:05.106644 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:55:05.122791 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:55:05.188648 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 6 23:55:05.193213 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
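The fetch stage above is Ignition pulling its config from the Azure Instance Metadata Service. A sketch of the same request: the endpoint and query parameters are exactly the ones in the GET line, the Metadata: true header is an IMDS requirement, and userData comes back base64-encoded; whether Ignition's logged SHA512 is taken over the decoded bytes is an assumption here, not confirmed by the log:

    import base64, hashlib, urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    # Azure IMDS only answers requests carrying this header.
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        config = base64.b64decode(resp.read())

    # Compare against the "parsing config with SHA512: ..." line above.
    print(hashlib.sha512(config).hexdigest())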
Jul 6 23:55:05.203252 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:55:05.298040 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:55:05.298453 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:55:05.303473 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:55:05.342152 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:05.346806 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:55:05.354212 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:55:05.361037 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (917)
Jul 6 23:55:05.366342 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:55:05.367448 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:55:05.379040 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:05.379075 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:05.379088 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:05.384037 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:55:05.394694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:05.401307 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:55:05.406170 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:55:05.941699 coreos-metadata[919]: Jul 06 23:55:05.941 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:55:05.948195 coreos-metadata[919]: Jul 06 23:55:05.948 INFO Fetch successful
Jul 6 23:55:05.951070 coreos-metadata[919]: Jul 06 23:55:05.948 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:55:05.958632 coreos-metadata[919]: Jul 06 23:55:05.958 INFO Fetch successful
Jul 6 23:55:05.976877 coreos-metadata[919]: Jul 06 23:55:05.974 INFO wrote hostname ci-4081.3.4-a-2f8c6d8615 to /sysroot/etc/hostname
Jul 6 23:55:05.979283 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:55:05.995508 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:55:06.052661 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:55:06.075518 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:55:06.084752 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:55:06.115191 systemd-networkd[877]: eth0: Gained IPv6LL
Jul 6 23:55:06.179271 systemd-networkd[877]: enP47654s1: Gained IPv6LL
Jul 6 23:55:06.948936 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:55:06.958247 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:55:06.970187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:55:06.975879 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:06.976837 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:55:07.000044 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:55:07.009814 ignition[1034]: INFO : Ignition 2.19.0 Jul 6 23:55:07.009814 ignition[1034]: INFO : Stage: mount Jul 6 23:55:07.017143 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:07.017143 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:07.017143 ignition[1034]: INFO : mount: mount passed Jul 6 23:55:07.017143 ignition[1034]: INFO : Ignition finished successfully Jul 6 23:55:07.011873 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:55:07.028049 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:55:07.044244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:07.056050 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1047) Jul 6 23:55:07.063222 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:07.063291 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:07.065891 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:07.071421 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:07.072867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:55:07.105632 ignition[1064]: INFO : Ignition 2.19.0 Jul 6 23:55:07.108023 ignition[1064]: INFO : Stage: files Jul 6 23:55:07.108023 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:07.108023 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:07.115732 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:55:07.125211 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:55:07.128794 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:55:07.203394 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:55:07.208242 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:55:07.208242 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:55:07.203940 unknown[1064]: wrote ssh authorized keys file for user: core Jul 6 23:55:07.218959 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:55:07.224461 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 6 23:55:07.515490 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:55:07.864917 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:55:07.864917 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 6 23:55:08.672545 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:55:08.995485 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:55:08.995485 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:55:09.010974 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: files passed Jul 6 23:55:09.019211 ignition[1064]: INFO : Ignition finished successfully Jul 6 23:55:09.012952 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:55:09.040294 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jul 6 23:55:09.062283 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:55:09.069207 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:55:09.069328 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:55:09.097578 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:09.097578 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:09.108176 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:09.114050 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:55:09.115381 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:55:09.128293 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:55:09.174541 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:55:09.174657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:55:09.181200 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:55:09.186872 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:55:09.190067 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:55:09.200311 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:55:09.213494 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:55:09.222208 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:55:09.234731 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:09.241689 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:09.251274 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:55:09.253904 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:55:09.254046 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:55:09.260434 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:55:09.265137 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:55:09.270631 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:55:09.276014 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:55:09.281627 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:09.290500 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:55:09.298597 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:55:09.299740 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:55:09.300645 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:55:09.301096 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:55:09.301509 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:55:09.301660 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:55:09.302466 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 6 23:55:09.302936 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:09.303350 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:55:09.318198 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:09.324963 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:55:09.325139 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:55:09.331107 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:55:09.395854 ignition[1117]: INFO : Ignition 2.19.0 Jul 6 23:55:09.395854 ignition[1117]: INFO : Stage: umount Jul 6 23:55:09.395854 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:09.395854 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:09.395854 ignition[1117]: INFO : umount: umount passed Jul 6 23:55:09.395854 ignition[1117]: INFO : Ignition finished successfully Jul 6 23:55:09.331267 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:55:09.341083 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:55:09.341212 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:55:09.346626 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:55:09.346766 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:55:09.368131 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:55:09.371535 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:55:09.371752 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:09.377887 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:55:09.381431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:55:09.381623 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:09.385086 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:55:09.385239 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:55:09.393107 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:55:09.393198 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:55:09.397464 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:55:09.397556 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:55:09.399428 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:55:09.399475 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:55:09.407611 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:55:09.407661 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:55:09.409081 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:55:09.409136 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:55:09.409456 systemd[1]: Stopped target network.target - Network. Jul 6 23:55:09.422451 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:55:09.422520 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:55:09.425996 systemd[1]: Stopped target paths.target - Path Units. 
Jul 6 23:55:09.435123 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:55:09.437515 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:09.441027 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:55:09.443517 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:55:09.451432 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:55:09.451490 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:55:09.460324 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:55:09.460387 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:55:09.465691 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:55:09.468815 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:55:09.545169 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:55:09.545259 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:55:09.553783 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:55:09.555852 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:55:09.558132 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:55:09.569672 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:55:09.569799 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:55:09.570325 systemd-networkd[877]: eth0: DHCPv6 lease lost Jul 6 23:55:09.577230 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:55:09.577377 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:55:09.581939 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:55:09.582007 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:09.603172 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:55:09.608561 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:55:09.608645 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:55:09.618258 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:55:09.618329 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:09.623848 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:55:09.623900 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:09.626128 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:55:09.626172 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:09.632642 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:09.656384 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:55:09.659048 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:09.666686 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:55:09.666768 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:09.675253 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:55:09.675308 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 6 23:55:09.676295 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:55:09.676343 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:55:09.703445 kernel: hv_netvsc 7ced8d4a-4564-7ced-8d4a-45647ced8d4a eth0: Data path switched from VF: enP47654s1 Jul 6 23:55:09.677241 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:55:09.677280 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:55:09.678108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:55:09.678148 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:09.696302 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:55:09.706506 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:55:09.706599 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:09.710291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:09.710344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:09.724943 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:55:09.725062 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:55:09.749772 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:55:09.749911 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:55:10.104872 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:55:10.105006 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:55:10.112559 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:55:10.118175 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:55:10.118247 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:10.131201 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:55:10.138914 systemd[1]: Switching root. 
Jul 6 23:55:10.224750 systemd-journald[176]: Journal stopped
[mem 0x3fff6000-0x3fff604f] Jul 6 23:54:59.101910 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jul 6 23:54:59.101923 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jul 6 23:54:59.101935 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jul 6 23:54:59.101948 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jul 6 23:54:59.101960 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jul 6 23:54:59.101973 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 6 23:54:59.101985 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 6 23:54:59.101997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jul 6 23:54:59.102012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jul 6 23:54:59.102025 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jul 6 23:54:59.102037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jul 6 23:54:59.102050 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jul 6 23:54:59.102062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jul 6 23:54:59.102075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jul 6 23:54:59.102088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jul 6 23:54:59.102100 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jul 6 23:54:59.102113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jul 6 23:54:59.102127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jul 6 23:54:59.102140 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jul 6 23:54:59.102152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jul 6 23:54:59.102165 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jul 6 23:54:59.102177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jul 6 23:54:59.102190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jul 6 23:54:59.102202 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jul 6 23:54:59.102215 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jul 6 23:54:59.103188 kernel: Zone ranges: Jul 6 23:54:59.103208 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 6 23:54:59.103249 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 6 23:54:59.103263 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jul 6 23:54:59.103276 kernel: Movable zone start for each node Jul 6 23:54:59.103290 kernel: Early memory node ranges Jul 6 23:54:59.103304 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 6 23:54:59.103318 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jul 6 23:54:59.103331 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jul 6 23:54:59.103345 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jul 6 23:54:59.103361 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jul 6 23:54:59.103375 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 6 23:54:59.103389 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 6 23:54:59.103402 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jul 6 23:54:59.103416 kernel: ACPI: PM-Timer IO Port: 0x408 Jul 6 
23:54:59.103429 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jul 6 23:54:59.103443 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jul 6 23:54:59.103456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 6 23:54:59.103470 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 6 23:54:59.103486 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jul 6 23:54:59.103500 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 6 23:54:59.103513 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jul 6 23:54:59.103527 kernel: Booting paravirtualized kernel on Hyper-V Jul 6 23:54:59.103541 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 6 23:54:59.103555 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 6 23:54:59.103568 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jul 6 23:54:59.103582 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jul 6 23:54:59.103595 kernel: pcpu-alloc: [0] 0 1 Jul 6 23:54:59.103612 kernel: Hyper-V: PV spinlocks enabled Jul 6 23:54:59.103625 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 6 23:54:59.103640 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:54:59.103655 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:54:59.103668 kernel: random: crng init done Jul 6 23:54:59.103681 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 6 23:54:59.103695 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:54:59.103708 kernel: Fallback order for Node 0: 0 Jul 6 23:54:59.103725 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jul 6 23:54:59.103748 kernel: Policy zone: Normal Jul 6 23:54:59.103766 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:54:59.103780 kernel: software IO TLB: area num 2. Jul 6 23:54:59.103795 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 310124K reserved, 0K cma-reserved) Jul 6 23:54:59.103809 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 6 23:54:59.103824 kernel: ftrace: allocating 37966 entries in 149 pages Jul 6 23:54:59.103838 kernel: ftrace: allocated 149 pages with 4 groups Jul 6 23:54:59.103853 kernel: Dynamic Preempt: voluntary Jul 6 23:54:59.103867 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:54:59.103883 kernel: rcu: RCU event tracing is enabled. Jul 6 23:54:59.103900 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 6 23:54:59.103915 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:54:59.103929 kernel: Rude variant of Tasks RCU enabled. Jul 6 23:54:59.103944 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:54:59.103959 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 6 23:54:59.103975 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 6 23:54:59.103990 kernel: Using NULL legacy PIC Jul 6 23:54:59.104004 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jul 6 23:54:59.104019 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:54:59.104033 kernel: Console: colour dummy device 80x25 Jul 6 23:54:59.104048 kernel: printk: console [tty1] enabled Jul 6 23:54:59.104062 kernel: printk: console [ttyS0] enabled Jul 6 23:54:59.104077 kernel: printk: bootconsole [earlyser0] disabled Jul 6 23:54:59.104091 kernel: ACPI: Core revision 20230628 Jul 6 23:54:59.104105 kernel: Failed to register legacy timer interrupt Jul 6 23:54:59.104122 kernel: APIC: Switch to symmetric I/O mode setup Jul 6 23:54:59.104136 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 6 23:54:59.104151 kernel: Hyper-V: Using IPI hypercalls Jul 6 23:54:59.104165 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jul 6 23:54:59.104179 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jul 6 23:54:59.104192 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jul 6 23:54:59.104206 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jul 6 23:54:59.104239 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jul 6 23:54:59.104266 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jul 6 23:54:59.104294 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jul 6 23:54:59.104307 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 6 23:54:59.104320 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 6 23:54:59.104334 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 6 23:54:59.104348 kernel: Spectre V2 : Mitigation: Retpolines Jul 6 23:54:59.104362 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 6 23:54:59.104373 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 6 23:54:59.104386 kernel: RETBleed: Vulnerable Jul 6 23:54:59.104401 kernel: Speculative Store Bypass: Vulnerable Jul 6 23:54:59.104418 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jul 6 23:54:59.104431 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 6 23:54:59.104442 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 6 23:54:59.104455 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 6 23:54:59.104469 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 6 23:54:59.104481 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 6 23:54:59.104499 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 6 23:54:59.104516 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 6 23:54:59.104529 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 6 23:54:59.104540 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 6 23:54:59.104554 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 6 23:54:59.104571 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 6 23:54:59.104583 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 6 23:54:59.104595 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. 
Jul 6 23:54:59.104609 kernel: Freeing SMP alternatives memory: 32K Jul 6 23:54:59.104622 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:54:59.104635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:54:59.104650 kernel: landlock: Up and running. Jul 6 23:54:59.104662 kernel: SELinux: Initializing. Jul 6 23:54:59.104676 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 6 23:54:59.104691 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 6 23:54:59.104706 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 6 23:54:59.104721 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:54:59.104739 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:54:59.104754 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:54:59.104769 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 6 23:54:59.104784 kernel: signal: max sigframe size: 3632 Jul 6 23:54:59.104798 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:54:59.104814 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:54:59.104828 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 6 23:54:59.104843 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:54:59.104858 kernel: smpboot: x86: Booting SMP configuration: Jul 6 23:54:59.104875 kernel: .... node #0, CPUs: #1 Jul 6 23:54:59.104890 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jul 6 23:54:59.104907 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 6 23:54:59.104922 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:54:59.104936 kernel: smpboot: Max logical packages: 1 Jul 6 23:54:59.104950 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jul 6 23:54:59.104965 kernel: devtmpfs: initialized Jul 6 23:54:59.104980 kernel: x86/mm: Memory block size: 128MB Jul 6 23:54:59.104998 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jul 6 23:54:59.105014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:54:59.105027 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 6 23:54:59.105040 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:54:59.105055 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:54:59.105069 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:54:59.105083 kernel: audit: type=2000 audit(1751846098.030:1): state=initialized audit_enabled=0 res=1 Jul 6 23:54:59.105097 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:54:59.105111 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 6 23:54:59.105129 kernel: cpuidle: using governor menu Jul 6 23:54:59.105143 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:54:59.105157 kernel: dca service started, version 1.12.1 Jul 6 23:54:59.105171 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jul 6 23:54:59.105186 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 6 23:54:59.105200 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:54:59.105213 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:54:59.105238 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:54:59.105253 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:54:59.105269 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:54:59.105284 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:54:59.105298 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:54:59.105312 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:54:59.105326 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 6 23:54:59.105339 kernel: ACPI: Interpreter enabled Jul 6 23:54:59.105354 kernel: ACPI: PM: (supports S0 S5) Jul 6 23:54:59.105368 kernel: ACPI: Using IOAPIC for interrupt routing Jul 6 23:54:59.105381 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 6 23:54:59.105398 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 6 23:54:59.105412 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jul 6 23:54:59.105426 kernel: iommu: Default domain type: Translated Jul 6 23:54:59.105440 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 6 23:54:59.105453 kernel: efivars: Registered efivars operations Jul 6 23:54:59.105467 kernel: PCI: Using ACPI for IRQ routing Jul 6 23:54:59.105481 kernel: PCI: System does not support PCI Jul 6 23:54:59.105494 kernel: vgaarb: loaded Jul 6 23:54:59.105508 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jul 6 23:54:59.105525 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:54:59.105538 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:54:59.105552 kernel: pnp: PnP ACPI init Jul 6 23:54:59.105566 kernel: pnp: PnP ACPI: found 3 devices Jul 6 23:54:59.105580 kernel: 
clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 6 23:54:59.105594 kernel: NET: Registered PF_INET protocol family Jul 6 23:54:59.105608 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 6 23:54:59.105622 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 6 23:54:59.105636 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:54:59.105653 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:54:59.105669 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 6 23:54:59.105683 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 6 23:54:59.105697 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 6 23:54:59.105711 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 6 23:54:59.105724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:54:59.105741 kernel: NET: Registered PF_XDP protocol family Jul 6 23:54:59.105755 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:54:59.105769 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 6 23:54:59.105785 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Jul 6 23:54:59.105799 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 6 23:54:59.105813 kernel: Initialise system trusted keyrings Jul 6 23:54:59.105826 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 6 23:54:59.105840 kernel: Key type asymmetric registered Jul 6 23:54:59.105854 kernel: Asymmetric key parser 'x509' registered Jul 6 23:54:59.105867 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 6 23:54:59.105883 kernel: io scheduler mq-deadline registered Jul 6 23:54:59.105897 kernel: io scheduler kyber registered Jul 6 23:54:59.105913 kernel: io scheduler bfq registered Jul 6 23:54:59.105926 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 6 23:54:59.105939 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:54:59.105953 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 6 23:54:59.105967 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 6 23:54:59.105980 kernel: i8042: PNP: No PS/2 controller found. 
Jul 6 23:54:59.109033 kernel: rtc_cmos 00:02: registered as rtc0 Jul 6 23:54:59.109158 kernel: rtc_cmos 00:02: setting system clock to 2025-07-06T23:54:58 UTC (1751846098) Jul 6 23:54:59.109299 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jul 6 23:54:59.109321 kernel: intel_pstate: CPU model not supported Jul 6 23:54:59.109336 kernel: efifb: probing for efifb Jul 6 23:54:59.109352 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 6 23:54:59.109367 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 6 23:54:59.109382 kernel: efifb: scrolling: redraw Jul 6 23:54:59.109398 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 6 23:54:59.109412 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:54:59.109430 kernel: fb0: EFI VGA frame buffer device Jul 6 23:54:59.109442 kernel: pstore: Using crash dump compression: deflate Jul 6 23:54:59.109455 kernel: pstore: Registered efi_pstore as persistent store backend Jul 6 23:54:59.109468 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:54:59.109484 kernel: Segment Routing with IPv6 Jul 6 23:54:59.109498 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:54:59.109511 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:54:59.109527 kernel: Key type dns_resolver registered Jul 6 23:54:59.109541 kernel: IPI shorthand broadcast: enabled Jul 6 23:54:59.109562 kernel: sched_clock: Marking stable (891003600, 49823300)->(1156273200, -215446300) Jul 6 23:54:59.109582 kernel: registered taskstats version 1 Jul 6 23:54:59.109598 kernel: Loading compiled-in X.509 certificates Jul 6 23:54:59.109614 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:54:59.109629 kernel: Key type .fscrypt registered Jul 6 23:54:59.109645 kernel: Key type fscrypt-provisioning registered Jul 6 23:54:59.109661 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 6 23:54:59.109677 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:54:59.109690 kernel: ima: No architecture policies found Jul 6 23:54:59.109710 kernel: clk: Disabling unused clocks Jul 6 23:54:59.109725 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:54:59.109739 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:54:59.109752 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:54:59.109765 kernel: Run /init as init process Jul 6 23:54:59.109780 kernel: with arguments: Jul 6 23:54:59.109794 kernel: /init Jul 6 23:54:59.109806 kernel: with environment: Jul 6 23:54:59.109819 kernel: HOME=/ Jul 6 23:54:59.109834 kernel: TERM=linux Jul 6 23:54:59.109847 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:54:59.109859 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:54:59.109871 systemd[1]: Detected virtualization microsoft. Jul 6 23:54:59.109883 systemd[1]: Detected architecture x86-64. Jul 6 23:54:59.109893 systemd[1]: Running in initrd. Jul 6 23:54:59.109903 systemd[1]: No hostname configured, using default hostname. Jul 6 23:54:59.109911 systemd[1]: Hostname set to . Jul 6 23:54:59.109925 systemd[1]: Initializing machine ID from random generator. 
Jul 6 23:54:59.109933 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:54:59.109944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:54:59.109954 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:54:59.109963 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:54:59.109973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:54:59.109983 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:54:59.109992 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:54:59.110007 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:54:59.110016 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:54:59.110024 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:54:59.110033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:54:59.110041 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:54:59.110050 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:54:59.110062 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:54:59.110073 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:54:59.110081 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:54:59.110090 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:54:59.110099 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:54:59.110110 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 6 23:54:59.110118 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:54:59.110127 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:54:59.110139 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:54:59.110149 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:54:59.110161 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:54:59.110170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:54:59.110180 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:54:59.110190 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:54:59.110198 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:54:59.110207 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:54:59.110250 systemd-journald[176]: Collecting audit messages is disabled. Jul 6 23:54:59.110277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:54:59.110289 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:54:59.110298 systemd-journald[176]: Journal started Jul 6 23:54:59.110322 systemd-journald[176]: Runtime Journal (/run/log/journal/6298148220d945d0b2ccb82d59aade98) is 8.0M, max 158.8M, 150.8M free. 
Jul 6 23:54:59.104532 systemd-modules-load[177]: Inserted module 'overlay' Jul 6 23:54:59.123247 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:54:59.126806 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:54:59.133931 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:54:59.151445 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:54:59.165124 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:54:59.165158 kernel: Bridge firewalling registered Jul 6 23:54:59.165055 systemd-modules-load[177]: Inserted module 'br_netfilter' Jul 6 23:54:59.167498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:54:59.177882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:54:59.184446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:54:59.187906 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:54:59.196752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:54:59.211384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:54:59.220375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:54:59.225593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:54:59.239174 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:54:59.242724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:54:59.252541 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:54:59.269169 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:54:59.283405 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:54:59.301518 systemd-resolved[203]: Positive Trust Anchors: Jul 6 23:54:59.308911 dracut-cmdline[213]: dracut-dracut-053 Jul 6 23:54:59.308911 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:54:59.301535 systemd-resolved[203]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:54:59.301589 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:54:59.305365 systemd-resolved[203]: Defaulting to hostname 'linux'. Jul 6 23:54:59.311351 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:54:59.314564 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:54:59.379239 kernel: SCSI subsystem initialized Jul 6 23:54:59.389237 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:54:59.401244 kernel: iscsi: registered transport (tcp) Jul 6 23:54:59.422283 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:54:59.422367 kernel: QLogic iSCSI HBA Driver Jul 6 23:54:59.458680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:54:59.467388 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:54:59.496760 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:54:59.496862 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:54:59.500445 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:54:59.541246 kernel: raid6: avx512x4 gen() 18750 MB/s Jul 6 23:54:59.560233 kernel: raid6: avx512x2 gen() 18676 MB/s Jul 6 23:54:59.579229 kernel: raid6: avx512x1 gen() 18747 MB/s Jul 6 23:54:59.598234 kernel: raid6: avx2x4 gen() 18544 MB/s Jul 6 23:54:59.617228 kernel: raid6: avx2x2 gen() 18650 MB/s Jul 6 23:54:59.637117 kernel: raid6: avx2x1 gen() 14185 MB/s Jul 6 23:54:59.637159 kernel: raid6: using algorithm avx512x4 gen() 18750 MB/s Jul 6 23:54:59.659071 kernel: raid6: .... xor() 8283 MB/s, rmw enabled Jul 6 23:54:59.659103 kernel: raid6: using avx512x2 recovery algorithm Jul 6 23:54:59.681239 kernel: xor: automatically using best checksumming function avx Jul 6 23:54:59.827245 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:54:59.837245 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:54:59.846398 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:54:59.859584 systemd-udevd[395]: Using default interface naming scheme 'v255'. Jul 6 23:54:59.864038 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:54:59.874470 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:54:59.897394 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation Jul 6 23:54:59.927552 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:54:59.935467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:54:59.978321 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:54:59.989386 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 6 23:55:00.021797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:55:00.028588 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:55:00.035444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:00.041848 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:00.051365 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:55:00.066340 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:55:00.079679 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:55:00.087745 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:55:00.087775 kernel: AES CTR mode by8 optimization enabled Jul 6 23:55:00.115238 kernel: hv_vmbus: Vmbus version:5.2 Jul 6 23:55:00.136797 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:55:00.140337 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:00.147366 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:00.160480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:00.170103 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 6 23:55:00.170134 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 6 23:55:00.160817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:00.180855 kernel: PTP clock support registered Jul 6 23:55:00.177961 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:00.189720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:00.197306 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 6 23:55:00.200456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:00.200579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:00.221261 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 6 23:55:00.223370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:00.237245 kernel: hv_vmbus: registering driver hv_netvsc Jul 6 23:55:00.240245 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:55:00.246990 kernel: hv_utils: Registering HyperV Utility Driver Jul 6 23:55:00.247027 kernel: hv_vmbus: registering driver hv_utils Jul 6 23:55:00.247042 kernel: hv_vmbus: registering driver hv_storvsc Jul 6 23:55:00.253004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:00.259365 kernel: hv_utils: Heartbeat IC version 3.0 Jul 6 23:55:00.259397 kernel: hv_utils: Shutdown IC version 3.2 Jul 6 23:55:00.259416 kernel: scsi host0: storvsc_host_t Jul 6 23:55:00.270441 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 6 23:55:00.270541 kernel: hv_utils: TimeSync IC version 4.0 Jul 6 23:55:00.270567 kernel: scsi host1: storvsc_host_t Jul 6 23:55:01.594894 systemd-resolved[203]: Clock change detected. Flushing caches. 
Jul 6 23:55:01.603401 kernel: hv_vmbus: registering driver hid_hyperv Jul 6 23:55:01.603446 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 6 23:55:01.603490 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 6 23:55:01.604193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:01.621161 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 6 23:55:01.648519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:01.660873 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 6 23:55:01.661275 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:55:01.665132 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 6 23:55:01.667653 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 6 23:55:01.667807 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 6 23:55:01.667944 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 6 23:55:01.668095 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 6 23:55:01.670039 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 6 23:55:01.679706 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:01.679768 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 6 23:55:01.823232 kernel: hv_netvsc 7ced8d4a-4564-7ced-8d4a-45647ced8d4a eth0: VF slot 1 added Jul 6 23:55:01.832144 kernel: hv_vmbus: registering driver hv_pci Jul 6 23:55:01.838058 kernel: hv_pci 72ddc379-ba26-4c04-ac12-5393d8f0638b: PCI VMBus probing: Using version 0x10004 Jul 6 23:55:01.843037 kernel: hv_pci 72ddc379-ba26-4c04-ac12-5393d8f0638b: PCI host bridge to bus ba26:00 Jul 6 23:55:01.843216 kernel: pci_bus ba26:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jul 6 23:55:01.848894 kernel: pci_bus ba26:00: No busn resource found for root bus, will use [bus 00-ff] Jul 6 23:55:01.854163 kernel: pci ba26:00:02.0: [15b3:1016] type 00 class 0x020000 Jul 6 23:55:01.859087 kernel: pci ba26:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 6 23:55:01.863193 kernel: pci ba26:00:02.0: enabling Extended Tags Jul 6 23:55:01.874292 kernel: pci ba26:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ba26:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 6 23:55:01.880657 kernel: pci_bus ba26:00: busn_res: [bus 00-ff] end is updated to 00 Jul 6 23:55:01.880999 kernel: pci ba26:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 6 23:55:02.054512 kernel: mlx5_core ba26:00:02.0: enabling device (0000 -> 0002) Jul 6 23:55:02.060054 kernel: mlx5_core ba26:00:02.0: firmware version: 14.30.5000 Jul 6 23:55:02.170510 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 6 23:55:02.248046 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (458) Jul 6 23:55:02.264357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 6 23:55:02.276253 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (449) Jul 6 23:55:02.282544 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
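The sd 0:0:0:0 lines above size sda at 63737856 512-byte logical blocks, reported as 32.6 GB (30.4 GiB). The same arithmetic can be reproduced from sysfs, assuming a Linux /sys; note that /sys/block/<dev>/size is always in 512-byte sectors regardless of the device's logical block size:

import pathlib

for dev in sorted(pathlib.Path("/sys/block").iterdir()):
    sectors = int((dev / "size").read_text())
    size = sectors * 512  # sysfs reports size in 512-byte sectors
    print(f"{dev.name}: {size / 1e9:.1f} GB ({size / 2**30:.1f} GiB)")

# For sda above: 63737856 * 512 = 32,633,782,272 bytes = 32.6 GB = 30.4 GiB.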
Jul 6 23:55:02.291198 kernel: hv_netvsc 7ced8d4a-4564-7ced-8d4a-45647ced8d4a eth0: VF registering: eth1 Jul 6 23:55:02.294449 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 6 23:55:02.299500 kernel: mlx5_core ba26:00:02.0 eth1: joined to eth0 Jul 6 23:55:02.299736 kernel: mlx5_core ba26:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 6 23:55:02.306542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 6 23:55:02.322480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:55:02.326145 kernel: mlx5_core ba26:00:02.0 enP47654s1: renamed from eth1 Jul 6 23:55:03.346960 disk-uuid[604]: The operation has completed successfully. Jul 6 23:55:03.350399 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:03.425736 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:55:03.425855 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:55:03.451180 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:55:03.457658 sh[693]: Success Jul 6 23:55:03.488263 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 6 23:55:03.693565 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:55:03.708141 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:55:03.710854 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:55:03.747496 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:55:03.747570 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:03.751285 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:55:03.754345 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:55:03.757179 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:55:04.025384 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:55:04.031235 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:55:04.047209 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:55:04.054145 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:55:04.066043 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:04.066084 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:04.071169 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:04.104552 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:04.114226 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:55:04.120948 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:04.126189 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:55:04.139181 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:55:04.168389 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:55:04.178248 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
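verity-setup.service above binds /dev/mapper/usr to the verity.usrhash value passed on the kernel command line. A hedged sketch of inspecting both ends of that binding, assuming root and the veritysetup tool from cryptsetup:

import pathlib
import subprocess

# Kernel parameters as key/value pairs; valueless flags are skipped here.
params = dict(
    tok.partition("=")[::2]
    for tok in pathlib.Path("/proc/cmdline").read_text().split()
    if "=" in tok
)
print("usrhash from cmdline:", params.get("verity.usrhash"))

# Reports the active mapping's root hash and whether it verified cleanly.
subprocess.run(["veritysetup", "status", "usr"], check=False)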
Jul 6 23:55:04.198092 systemd-networkd[877]: lo: Link UP Jul 6 23:55:04.198102 systemd-networkd[877]: lo: Gained carrier Jul 6 23:55:04.200392 systemd-networkd[877]: Enumeration completed Jul 6 23:55:04.200678 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:55:04.203715 systemd[1]: Reached target network.target - Network. Jul 6 23:55:04.205144 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:04.205149 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:55:04.266043 kernel: mlx5_core ba26:00:02.0 enP47654s1: Link up Jul 6 23:55:04.295053 kernel: hv_netvsc 7ced8d4a-4564-7ced-8d4a-45647ced8d4a eth0: Data path switched to VF: enP47654s1 Jul 6 23:55:04.295933 systemd-networkd[877]: enP47654s1: Link UP Jul 6 23:55:04.296082 systemd-networkd[877]: eth0: Link UP Jul 6 23:55:04.296292 systemd-networkd[877]: eth0: Gained carrier Jul 6 23:55:04.296305 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:04.308073 systemd-networkd[877]: enP47654s1: Gained carrier Jul 6 23:55:04.347102 systemd-networkd[877]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 6 23:55:04.862896 ignition[828]: Ignition 2.19.0 Jul 6 23:55:04.862908 ignition[828]: Stage: fetch-offline Jul 6 23:55:04.862950 ignition[828]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:04.867590 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:55:04.862961 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:04.863092 ignition[828]: parsed url from cmdline: "" Jul 6 23:55:04.863097 ignition[828]: no config URL provided Jul 6 23:55:04.863104 ignition[828]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:04.863114 ignition[828]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:04.863121 ignition[828]: failed to fetch config: resource requires networking Jul 6 23:55:04.884332 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
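The Link UP / Gained carrier transitions systemd-networkd logs above are also visible in sysfs; a minimal sketch:

import pathlib

for link in sorted(pathlib.Path("/sys/class/net").iterdir()):
    state = (link / "operstate").read_text().strip()
    try:
        carrier = (link / "carrier").read_text().strip()
    except OSError:
        carrier = "?"  # reading carrier on a down interface raises EINVAL
    print(f"{link.name}: operstate={state} carrier={carrier}")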
Jul 6 23:55:04.864932 ignition[828]: Ignition finished successfully Jul 6 23:55:04.903984 ignition[886]: Ignition 2.19.0 Jul 6 23:55:04.903995 ignition[886]: Stage: fetch Jul 6 23:55:04.904218 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:04.904231 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:04.904323 ignition[886]: parsed url from cmdline: "" Jul 6 23:55:04.904328 ignition[886]: no config URL provided Jul 6 23:55:04.904335 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:04.904343 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:04.904363 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 6 23:55:04.999391 ignition[886]: GET result: OK Jul 6 23:55:04.999500 ignition[886]: config has been read from IMDS userdata Jul 6 23:55:04.999541 ignition[886]: parsing config with SHA512: 8afefc7ad14204c829da7a39b2b3286d5724a2cc1ba7bd5da298c7f4fd80dae8a059264e4ab169514585e4797ce43c69bf272fd6e5a637cbc3c11b3270a2602c Jul 6 23:55:05.004384 unknown[886]: fetched base config from "system" Jul 6 23:55:05.004397 unknown[886]: fetched base config from "system" Jul 6 23:55:05.004938 ignition[886]: fetch: fetch complete Jul 6 23:55:05.004406 unknown[886]: fetched user config from "azure" Jul 6 23:55:05.004944 ignition[886]: fetch: fetch passed Jul 6 23:55:05.006962 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:55:05.004992 ignition[886]: Ignition finished successfully Jul 6 23:55:05.024255 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:55:05.041874 ignition[892]: Ignition 2.19.0 Jul 6 23:55:05.041885 ignition[892]: Stage: kargs Jul 6 23:55:05.042115 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:05.042128 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:05.042993 ignition[892]: kargs: kargs passed Jul 6 23:55:05.043051 ignition[892]: Ignition finished successfully Jul 6 23:55:05.054650 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:55:05.064178 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:55:05.082610 ignition[898]: Ignition 2.19.0 Jul 6 23:55:05.082620 ignition[898]: Stage: disks Jul 6 23:55:05.084607 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:55:05.082854 ignition[898]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:05.088331 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:05.082873 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:05.092224 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:55:05.083753 ignition[898]: disks: disks passed Jul 6 23:55:05.095607 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:55:05.083795 ignition[898]: Ignition finished successfully Jul 6 23:55:05.101431 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:55:05.106644 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:55:05.122791 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:55:05.188648 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 6 23:55:05.193213 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
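The GET that ignition[886] logs above targets the Azure instance metadata service (IMDS). An illustrative sketch of the same fetch-and-hash, not Ignition's actual implementation (which is Go): IMDS requires the Metadata: true request header, and the userData endpoint returns its payload base64-encoded:

import base64
import hashlib
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/"
       "userData?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    config = base64.b64decode(resp.read())
# Ignition's "parsing config with SHA512" line logs a digest of this form:
print("SHA512:", hashlib.sha512(config).hexdigest())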
Jul 6 23:55:05.203252 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:55:05.298040 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:55:05.298453 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:55:05.303473 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:55:05.342152 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:05.346806 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:55:05.354212 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:55:05.361037 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (917) Jul 6 23:55:05.366342 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:55:05.367448 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:55:05.379040 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:05.379075 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:05.379088 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:05.384037 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:05.394694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:55:05.401307 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:55:05.406170 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:55:05.941699 coreos-metadata[919]: Jul 06 23:55:05.941 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:55:05.948195 coreos-metadata[919]: Jul 06 23:55:05.948 INFO Fetch successful Jul 6 23:55:05.951070 coreos-metadata[919]: Jul 06 23:55:05.948 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:55:05.958632 coreos-metadata[919]: Jul 06 23:55:05.958 INFO Fetch successful Jul 6 23:55:05.976877 coreos-metadata[919]: Jul 06 23:55:05.974 INFO wrote hostname ci-4081.3.4-a-2f8c6d8615 to /sysroot/etc/hostname Jul 6 23:55:05.979283 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:55:05.995508 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:55:06.052661 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:55:06.075518 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:55:06.084752 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:55:06.115191 systemd-networkd[877]: eth0: Gained IPv6LL Jul 6 23:55:06.179271 systemd-networkd[877]: enP47654s1: Gained IPv6LL Jul 6 23:55:06.948936 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:06.958247 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:55:06.970187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:55:06.975879 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:06.976837 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:55:07.000044 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 6 23:55:07.009814 ignition[1034]: INFO : Ignition 2.19.0 Jul 6 23:55:07.009814 ignition[1034]: INFO : Stage: mount Jul 6 23:55:07.017143 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:07.017143 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:07.017143 ignition[1034]: INFO : mount: mount passed Jul 6 23:55:07.017143 ignition[1034]: INFO : Ignition finished successfully Jul 6 23:55:07.011873 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:55:07.028049 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:55:07.044244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:07.056050 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1047) Jul 6 23:55:07.063222 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:07.063291 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:07.065891 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:07.071421 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:07.072867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:55:07.105632 ignition[1064]: INFO : Ignition 2.19.0 Jul 6 23:55:07.108023 ignition[1064]: INFO : Stage: files Jul 6 23:55:07.108023 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:07.108023 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:07.115732 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:55:07.125211 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:55:07.128794 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:55:07.203394 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:55:07.208242 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:55:07.208242 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:55:07.203940 unknown[1064]: wrote ssh authorized keys file for user: core Jul 6 23:55:07.218959 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:55:07.224461 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 6 23:55:07.515490 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:55:07.864917 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:55:07.864917 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:55:07.876128 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 6 23:55:08.672545 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:55:08.995485 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:55:08.995485 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:55:09.010974 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:55:09.019211 ignition[1064]: INFO : files: files passed Jul 6 23:55:09.019211 ignition[1064]: INFO : Ignition finished successfully Jul 6 23:55:09.012952 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:55:09.040294 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jul 6 23:55:09.062283 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:55:09.069207 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:55:09.069328 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:55:09.097578 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:09.097578 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:09.108176 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:09.114050 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:55:09.115381 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:55:09.128293 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:55:09.174541 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:55:09.174657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:55:09.181200 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:55:09.186872 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:55:09.190067 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:55:09.200311 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:55:09.213494 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:55:09.222208 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:55:09.234731 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:09.241689 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:09.251274 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:55:09.253904 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:55:09.254046 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:55:09.260434 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:55:09.265137 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:55:09.270631 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:55:09.276014 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:55:09.281627 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:09.290500 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:55:09.298597 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:55:09.299740 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:55:09.300645 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:55:09.301096 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:55:09.301509 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:55:09.301660 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:55:09.302466 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 6 23:55:09.302936 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:09.303350 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:55:09.318198 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:09.324963 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:55:09.325139 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:55:09.331107 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:55:09.395854 ignition[1117]: INFO : Ignition 2.19.0 Jul 6 23:55:09.395854 ignition[1117]: INFO : Stage: umount Jul 6 23:55:09.395854 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:09.395854 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:55:09.395854 ignition[1117]: INFO : umount: umount passed Jul 6 23:55:09.395854 ignition[1117]: INFO : Ignition finished successfully Jul 6 23:55:09.331267 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:55:09.341083 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:55:09.341212 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:55:09.346626 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:55:09.346766 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:55:09.368131 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:55:09.371535 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:55:09.371752 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:09.377887 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:55:09.381431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:55:09.381623 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:09.385086 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:55:09.385239 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:55:09.393107 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:55:09.393198 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:55:09.397464 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:55:09.397556 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:55:09.399428 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:55:09.399475 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:55:09.407611 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:55:09.407661 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:55:09.409081 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:55:09.409136 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:55:09.409456 systemd[1]: Stopped target network.target - Network. Jul 6 23:55:09.422451 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:55:09.422520 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:55:09.425996 systemd[1]: Stopped target paths.target - Path Units. 
Jul 6 23:55:09.435123 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:55:09.437515 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:09.441027 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:55:09.443517 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:55:09.451432 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:55:09.451490 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:55:09.460324 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:55:09.460387 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:55:09.465691 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:55:09.468815 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:55:09.545169 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:55:09.545259 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:55:09.553783 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:55:09.555852 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:55:09.558132 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:55:09.569672 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:55:09.569799 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:55:09.570325 systemd-networkd[877]: eth0: DHCPv6 lease lost Jul 6 23:55:09.577230 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:55:09.577377 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:55:09.581939 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:55:09.582007 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:09.603172 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:55:09.608561 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:55:09.608645 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:55:09.618258 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:55:09.618329 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:09.623848 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:55:09.623900 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:09.626128 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:55:09.626172 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:09.632642 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:09.656384 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:55:09.659048 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:09.666686 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:55:09.666768 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:09.675253 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:55:09.675308 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 6 23:55:09.676295 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:55:09.676343 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:55:09.703445 kernel: hv_netvsc 7ced8d4a-4564-7ced-8d4a-45647ced8d4a eth0: Data path switched from VF: enP47654s1 Jul 6 23:55:09.677241 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:55:09.677280 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:55:09.678108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:55:09.678148 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:09.696302 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:55:09.706506 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:55:09.706599 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:09.710291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:09.710344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:09.724943 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:55:09.725062 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:55:09.749772 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:55:09.749911 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:55:10.104872 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:55:10.105006 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:55:10.112559 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:55:10.118175 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:55:10.118247 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:10.131201 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:55:10.138914 systemd[1]: Switching root. Jul 6 23:55:10.224750 systemd-journald[176]: Journal stopped Jul 6 23:55:14.702800 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Jul 6 23:55:14.702850 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:55:14.702869 kernel: SELinux: policy capability open_perms=1 Jul 6 23:55:14.702883 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:55:14.702896 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:55:14.702910 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:55:14.702926 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:55:14.702944 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:55:14.702958 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:55:14.702973 kernel: audit: type=1403 audit(1751846111.499:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:55:14.702988 systemd[1]: Successfully loaded SELinux policy in 133.496ms. Jul 6 23:55:14.703006 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.707ms. 
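Once the real root is running, the SELinux policy capabilities listed above can be read back from selinuxfs; a sketch assuming it is mounted at /sys/fs/selinux:

import pathlib

sel = pathlib.Path("/sys/fs/selinux")
if sel.is_dir():
    mode = (sel / "enforce").read_text().strip()
    print("enforcing" if mode == "1" else "permissive")
    for cap in sorted((sel / "policy_capabilities").iterdir()):
        print(f"policy capability {cap.name}={cap.read_text().strip()}")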
Jul 6 23:55:14.703126 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:14.703144 systemd[1]: Detected virtualization microsoft. Jul 6 23:55:14.703165 systemd[1]: Detected architecture x86-64. Jul 6 23:55:14.703182 systemd[1]: Detected first boot. Jul 6 23:55:14.703199 systemd[1]: Hostname set to <ci-4081.3.4-a-2f8c6d8615>. Jul 6 23:55:14.703216 systemd[1]: Initializing machine ID from random generator. Jul 6 23:55:14.703232 zram_generator::config[1161]: No configuration found. Jul 6 23:55:14.703253 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:55:14.703269 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:55:14.703286 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:55:14.703302 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:55:14.703320 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:55:14.703337 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:55:14.703354 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:55:14.703374 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:55:14.703392 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:55:14.703408 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:55:14.703425 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:55:14.703442 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:55:14.703459 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:14.703476 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:14.703493 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:55:14.703513 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:55:14.703530 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:55:14.703547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:14.703564 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:55:14.703580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:14.703598 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:55:14.703620 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:55:14.703638 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:55:14.703659 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:55:14.703676 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:14.703694 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:14.703711 systemd[1]: Reached target slices.target - Slice Units.
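The +FEATURE/-FEATURE string near the top of the chunk above records systemd's compile-time options (systemctl --version prints the same line). Parsing it is mechanical:

def parse_features(line: str) -> dict:
    """Map FEATURE -> True/False from a +X/-Y compile-flag string."""
    return {tok[1:]: tok.startswith("+")
            for tok in line.split()
            if tok[:1] in "+-"}

flags = parse_features(
    "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
    "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
    "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
    "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT"
)
print(sum(flags.values()), "compiled in,",
      len(flags) - sum(flags.values()), "compiled out")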
Jul 6 23:55:14.703729 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:14.703746 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:55:14.703763 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:55:14.703784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:14.703801 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:14.703819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:14.703845 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:55:14.703864 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:55:14.703884 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:55:14.705123 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:55:14.705151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:14.705171 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:55:14.705188 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:55:14.705206 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:55:14.705225 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:55:14.705244 systemd[1]: Reached target machines.target - Containers. Jul 6 23:55:14.705268 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:55:14.705292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:14.705316 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:14.705335 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:55:14.705353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:14.705370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:55:14.705389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:14.705406 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:55:14.705423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:14.705444 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:55:14.705461 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:55:14.705477 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:55:14.705494 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:55:14.705510 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:55:14.705527 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:14.705543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:14.705560 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 6 23:55:14.705579 kernel: loop: module loaded Jul 6 23:55:14.705594 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:55:14.705611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:55:14.705627 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:55:14.705643 systemd[1]: Stopped verity-setup.service. Jul 6 23:55:14.705660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:14.705709 systemd-journald[1253]: Collecting audit messages is disabled. Jul 6 23:55:14.705747 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:55:14.705764 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:55:14.705780 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:55:14.705798 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:55:14.705814 kernel: ACPI: bus type drm_connector registered Jul 6 23:55:14.705829 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:55:14.705849 systemd-journald[1253]: Journal started Jul 6 23:55:14.705883 systemd-journald[1253]: Runtime Journal (/run/log/journal/c7cd378053b740fa8301ea23258d3c0e) is 8.0M, max 158.8M, 150.8M free. Jul 6 23:55:13.908236 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:55:14.046648 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:55:14.047034 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:55:14.715073 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:55:14.715115 kernel: fuse: init (API version 7.39) Jul 6 23:55:14.720784 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:55:14.723713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:55:14.727523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:14.731218 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:55:14.731405 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:55:14.735407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:14.735601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:14.739346 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:55:14.739537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:55:14.742877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:14.743164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:14.746957 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:55:14.747397 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:55:14.751143 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:14.751292 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:14.754964 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:55:14.759514 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:55:14.764480 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
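The journald lines above report an 8.0M runtime journal with a 158.8M cap; the same accounting is queryable at runtime, assuming journalctl is present:

import subprocess

# --disk-usage prints the combined size of active and archived journals.
out = subprocess.run(["journalctl", "--disk-usage"],
                     capture_output=True, text=True, check=True)
print(out.stdout.strip())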
Jul 6 23:55:14.787501 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:55:14.798104 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:55:14.803677 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:55:14.806873 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:55:14.807038 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:55:14.810914 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:55:14.814994 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:55:14.821195 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:55:14.826210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:14.840862 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:55:14.849149 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:55:14.852555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:55:14.857506 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:55:14.860761 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:55:14.861696 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:55:14.869153 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:55:14.879265 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:55:14.888056 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:14.894421 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:55:14.897950 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:55:14.901900 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:55:14.905609 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:55:14.916482 systemd-journald[1253]: Time spent on flushing to /var/log/journal/c7cd378053b740fa8301ea23258d3c0e is 34.169ms for 957 entries. Jul 6 23:55:14.916482 systemd-journald[1253]: System Journal (/var/log/journal/c7cd378053b740fa8301ea23258d3c0e) is 8.0M, max 2.6G, 2.6G free. Jul 6 23:55:15.018470 systemd-journald[1253]: Received client request to flush runtime journal. Jul 6 23:55:15.018529 kernel: loop0: detected capacity change from 0 to 224512 Jul 6 23:55:15.018552 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:55:14.915646 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:55:14.934347 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:55:14.949208 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:55:14.977981 udevadm[1308]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Jul 6 23:55:14.989145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:15.019799 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:55:15.047126 kernel: loop1: detected capacity change from 0 to 31056 Jul 6 23:55:15.061530 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:55:15.062199 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:55:15.084676 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:55:15.095196 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:55:15.211249 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jul 6 23:55:15.211276 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jul 6 23:55:15.217510 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:15.390064 kernel: loop2: detected capacity change from 0 to 140768 Jul 6 23:55:15.729050 kernel: loop3: detected capacity change from 0 to 142488 Jul 6 23:55:16.121063 kernel: loop4: detected capacity change from 0 to 224512 Jul 6 23:55:16.133278 kernel: loop5: detected capacity change from 0 to 31056 Jul 6 23:55:16.144092 kernel: loop6: detected capacity change from 0 to 140768 Jul 6 23:55:16.152349 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:55:16.160047 kernel: loop7: detected capacity change from 0 to 142488 Jul 6 23:55:16.162431 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:16.173056 (sd-merge)[1322]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 6 23:55:16.173733 (sd-merge)[1322]: Merged extensions into '/usr'. Jul 6 23:55:16.179980 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:55:16.180089 systemd[1]: Reloading... Jul 6 23:55:16.201265 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Jul 6 23:55:16.250049 zram_generator::config[1349]: No configuration found. Jul 6 23:55:16.403727 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:16.461716 systemd[1]: Reloading finished in 281 ms. Jul 6 23:55:16.494286 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:55:16.505226 systemd[1]: Starting ensure-sysext.service... Jul 6 23:55:16.508667 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:55:16.521609 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:16.537211 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:55:16.554281 systemd[1]: Reloading requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:55:16.554303 systemd[1]: Reloading... Jul 6 23:55:16.598609 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:55:16.599342 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
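The (sd-merge) lines above are systemd-sysext overlaying the named extension images onto /usr and /opt. A sketch of listing what it would consider, assuming the standard search paths, plus its status verb:

import pathlib
import subprocess

# systemd-sysext scans these directories for extension images or trees.
for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    base = pathlib.Path(d)
    if base.is_dir():
        for entry in sorted(base.iterdir()):
            print(f"{d}/{entry.name}")

# Shows which extensions are currently merged onto each hierarchy.
subprocess.run(["systemd-sysext", "status"], check=False)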
Jul 6 23:55:16.605742 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:55:16.606446 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jul 6 23:55:16.606543 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jul 6 23:55:16.631996 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:55:16.632012 systemd-tmpfiles[1409]: Skipping /boot Jul 6 23:55:16.691791 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:55:16.691812 systemd-tmpfiles[1409]: Skipping /boot Jul 6 23:55:16.727052 zram_generator::config[1461]: No configuration found. Jul 6 23:55:16.858108 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:55:16.862082 kernel: hv_vmbus: registering driver hyperv_fb Jul 6 23:55:16.869095 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 6 23:55:16.878123 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 6 23:55:16.890093 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:55:16.897051 kernel: hv_vmbus: registering driver hv_balloon Jul 6 23:55:16.897148 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:55:16.903960 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 6 23:55:17.071906 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1422) Jul 6 23:55:17.128403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:17.259948 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:55:17.260853 systemd[1]: Reloading finished in 706 ms. Jul 6 23:55:17.354286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:17.413552 systemd[1]: Finished ensure-sysext.service. Jul 6 23:55:17.466780 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jul 6 23:55:17.463585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 6 23:55:17.467496 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:17.482197 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:17.487686 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:55:17.491330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:17.493226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:17.500527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:55:17.507546 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:17.512000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:17.517214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:17.519501 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:55:17.533264 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jul 6 23:55:17.545254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:55:17.551197 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:55:17.566267 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:55:17.573188 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:55:17.578205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:17.583276 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:17.584938 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:55:17.591488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:17.591704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:17.595541 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:55:17.596180 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:55:17.608502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:17.608699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:17.617712 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:17.617911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:17.621813 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:55:17.641336 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:55:17.645192 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:55:17.645455 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:55:17.647057 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:55:17.661365 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:55:17.713419 augenrules[1617]: No rules Jul 6 23:55:17.718046 lvm[1604]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:55:17.717948 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:17.755807 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:55:17.766589 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:55:17.770471 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:17.779285 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:55:17.791808 lvm[1625]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:55:17.809936 systemd-resolved[1588]: Positive Trust Anchors: Jul 6 23:55:17.809960 systemd-resolved[1588]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:55:17.810065 systemd-resolved[1588]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:55:17.816818 systemd-networkd[1418]: lo: Link UP Jul 6 23:55:17.816827 systemd-networkd[1418]: lo: Gained carrier Jul 6 23:55:17.819411 systemd-networkd[1418]: Enumeration completed Jul 6 23:55:17.819620 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:55:17.819820 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:17.819823 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:55:17.829094 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:55:17.832868 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:55:17.845350 systemd-resolved[1588]: Using system hostname 'ci-4081.3.4-a-2f8c6d8615'. Jul 6 23:55:17.880046 kernel: mlx5_core ba26:00:02.0 enP47654s1: Link up Jul 6 23:55:17.899086 kernel: hv_netvsc 7ced8d4a-4564-7ced-8d4a-45647ced8d4a eth0: Data path switched to VF: enP47654s1 Jul 6 23:55:17.902388 systemd-networkd[1418]: enP47654s1: Link UP Jul 6 23:55:17.902390 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:55:17.902530 systemd-networkd[1418]: eth0: Link UP Jul 6 23:55:17.902534 systemd-networkd[1418]: eth0: Gained carrier Jul 6 23:55:17.902551 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:17.903674 systemd[1]: Reached target network.target - Network. Jul 6 23:55:17.904008 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:17.912934 systemd-networkd[1418]: enP47654s1: Gained carrier Jul 6 23:55:17.938094 systemd-networkd[1418]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 6 23:55:17.995713 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:55:17.999897 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:55:18.295715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:19.619295 systemd-networkd[1418]: enP47654s1: Gained IPv6LL Jul 6 23:55:19.683311 systemd-networkd[1418]: eth0: Gained IPv6LL Jul 6 23:55:19.686423 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:55:19.690446 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:55:20.230226 ldconfig[1292]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
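The lease systemd-networkd reports above (10.200.8.12/24, gateway 10.200.8.1, served by 168.63.129.16, Azure's platform endpoint) can be sanity-checked with the standard library; a small sketch:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.8.12/24")
    gateway = ipaddress.ip_address("10.200.8.1")
    wireserver = ipaddress.ip_address("168.63.129.16")

    assert gateway in iface.network          # gateway is on-link
    assert wireserver not in iface.network   # WireServer is reached via the gateway
    print(iface.network)                     # 10.200.8.0/24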
Jul 6 23:55:20.242751 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:55:20.252271 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:55:20.263524 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:55:20.267047 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:55:20.270401 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:55:20.273610 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:55:20.277132 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:55:20.280116 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:55:20.283491 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:55:20.286915 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:55:20.286958 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:55:20.289234 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:55:20.292392 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:55:20.296772 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:55:20.316703 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:55:20.320554 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:55:20.323678 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:55:20.326431 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:55:20.328859 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:55:20.328888 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:55:20.336131 systemd[1]: Starting chronyd.service - NTP client/server... Jul 6 23:55:20.342145 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:55:20.351239 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:55:20.359252 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:55:20.372142 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:55:20.377271 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:55:20.380149 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:55:20.384985 jq[1644]: false Jul 6 23:55:20.380202 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 6 23:55:20.382216 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 6 23:55:20.387266 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 6 23:55:20.394952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 6 23:55:20.401015 (chronyd)[1640]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 6 23:55:20.401728 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:55:20.406850 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:55:20.415146 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:55:20.423223 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:55:20.431430 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:55:20.438892 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:55:20.442738 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:55:20.443834 KVP[1648]: KVP starting; pid is:1648 Jul 6 23:55:20.443319 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:55:20.446280 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:55:20.457109 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:55:20.470842 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:55:20.471086 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:55:20.475350 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:55:20.475564 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:55:20.488396 chronyd[1670]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 6 23:55:20.502056 kernel: hv_utils: KVP IC version 4.0 Jul 6 23:55:20.494070 KVP[1648]: KVP LIC Version: 3.1 Jul 6 23:55:20.515660 jq[1660]: true Jul 6 23:55:20.540431 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:55:20.542127 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:55:20.545471 chronyd[1670]: Timezone right/UTC failed leap second check, ignoring Jul 6 23:55:20.548054 (ntainerd)[1675]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:55:20.548169 chronyd[1670]: Loaded seccomp filter (level 2) Jul 6 23:55:20.549336 systemd[1]: Started chronyd.service - NTP client/server. 
Jul 6 23:55:20.570718 jq[1682]: true Jul 6 23:55:20.577789 extend-filesystems[1647]: Found loop4 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found loop5 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found loop6 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found loop7 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found sda Jul 6 23:55:20.577789 extend-filesystems[1647]: Found sda1 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found sda2 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found sda3 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found usr Jul 6 23:55:20.577789 extend-filesystems[1647]: Found sda4 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found sda6 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found sda7 Jul 6 23:55:20.577789 extend-filesystems[1647]: Found sda9 Jul 6 23:55:20.577789 extend-filesystems[1647]: Checking size of /dev/sda9 Jul 6 23:55:20.640189 update_engine[1658]: I20250706 23:55:20.584750 1658 main.cc:92] Flatcar Update Engine starting Jul 6 23:55:20.640189 update_engine[1658]: I20250706 23:55:20.631263 1658 update_check_scheduler.cc:74] Next update check in 6m58s Jul 6 23:55:20.607174 dbus-daemon[1643]: [system] SELinux support is enabled Jul 6 23:55:20.594569 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:55:20.640779 tar[1667]: linux-amd64/LICENSE Jul 6 23:55:20.640779 tar[1667]: linux-amd64/helm Jul 6 23:55:20.607364 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:55:20.628479 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:55:20.628525 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:55:20.656064 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:55:20.656102 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:55:20.659704 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:55:20.667190 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:55:20.681281 extend-filesystems[1647]: Old size kept for /dev/sda9 Jul 6 23:55:20.687240 extend-filesystems[1647]: Found sr0 Jul 6 23:55:20.708772 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:55:20.708999 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:55:20.731320 systemd-logind[1657]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:55:20.738649 systemd-logind[1657]: New seat seat0. Jul 6 23:55:20.740495 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:55:20.838772 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1723) Jul 6 23:55:20.860918 bash[1720]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:55:20.863119 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:55:20.872176 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
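extend-filesystems reports "Old size kept for /dev/sda9" above, meaning the root filesystem already fills its partition. The comparison it is effectively making can be sketched as follows (the real service has its own logic; this only shows the size check):

    import os

    def device_bytes(dev="sda9"):
        # /sys/class/block/<dev>/size counts 512-byte sectors.
        with open(f"/sys/class/block/{dev}/size") as f:
            return int(f.read()) * 512

    def filesystem_bytes(mount="/"):
        st = os.statvfs(mount)
        return st.f_frsize * st.f_blocks

    # Growth is only worthwhile if the partition is larger than the filesystem.
    print("partition:", device_bytes(), "filesystem:", filesystem_bytes())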
Jul 6 23:55:20.988904 coreos-metadata[1642]: Jul 06 23:55:20.987 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:55:20.997165 coreos-metadata[1642]: Jul 06 23:55:20.996 INFO Fetch successful Jul 6 23:55:20.997165 coreos-metadata[1642]: Jul 06 23:55:20.996 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 6 23:55:21.005949 coreos-metadata[1642]: Jul 06 23:55:21.004 INFO Fetch successful Jul 6 23:55:21.005949 coreos-metadata[1642]: Jul 06 23:55:21.004 INFO Fetching http://168.63.129.16/machine/4a0e12b2-3595-4ec2-b6b2-daae854b6ed3/931ec2bc%2D75ed%2D42cd%2D81a3%2D5acb708a0e9e.%5Fci%2D4081.3.4%2Da%2D2f8c6d8615?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 6 23:55:21.010702 coreos-metadata[1642]: Jul 06 23:55:21.009 INFO Fetch successful Jul 6 23:55:21.010702 coreos-metadata[1642]: Jul 06 23:55:21.010 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:55:21.022877 coreos-metadata[1642]: Jul 06 23:55:21.022 INFO Fetch successful Jul 6 23:55:21.092106 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:55:21.099966 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:55:21.180370 sshd_keygen[1685]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:55:21.192342 locksmithd[1700]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:55:21.241149 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:55:21.256360 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:55:21.269282 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 6 23:55:21.286172 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:55:21.286567 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:55:21.307001 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:55:21.325226 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 6 23:55:21.340813 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:55:21.357328 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:55:21.368762 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:55:21.376378 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:55:21.646833 containerd[1675]: time="2025-07-06T23:55:21.646683800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:55:21.711321 tar[1667]: linux-amd64/README.md Jul 6 23:55:21.715133 containerd[1675]: time="2025-07-06T23:55:21.712733100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:21.718338 containerd[1675]: time="2025-07-06T23:55:21.718289100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:21.719242 containerd[1675]: time="2025-07-06T23:55:21.719208000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
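The coreos-metadata fetches above hit two Azure endpoints: the WireServer (168.63.129.16) and the instance metadata service (169.254.169.254). The IMDS call it logs can be reproduced with the standard library; IMDS requires the Metadata: true request header. This snippet is a sketch, not the agent's own code, and the printed value is illustrative:

    import urllib.request

    req = urllib.request.Request(
        "http://169.254.169.254/metadata/instance/compute/vmSize"
        "?api-version=2017-08-01&format=text",
        headers={"Metadata": "true"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # e.g. a VM size string such as Standard_D2s_v3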
type=io.containerd.event.v1 Jul 6 23:55:21.719299 containerd[1675]: time="2025-07-06T23:55:21.719254600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:55:21.719461 containerd[1675]: time="2025-07-06T23:55:21.719435000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:55:21.719525 containerd[1675]: time="2025-07-06T23:55:21.719467600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:21.719575 containerd[1675]: time="2025-07-06T23:55:21.719552900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:21.719612 containerd[1675]: time="2025-07-06T23:55:21.719578000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:21.719834 containerd[1675]: time="2025-07-06T23:55:21.719806000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:21.719884 containerd[1675]: time="2025-07-06T23:55:21.719835500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:21.719884 containerd[1675]: time="2025-07-06T23:55:21.719854800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:21.719884 containerd[1675]: time="2025-07-06T23:55:21.719869000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:21.719987 containerd[1675]: time="2025-07-06T23:55:21.719970700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:21.720265 containerd[1675]: time="2025-07-06T23:55:21.720239200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:21.722541 containerd[1675]: time="2025-07-06T23:55:21.722074900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:21.722541 containerd[1675]: time="2025-07-06T23:55:21.722118700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:55:21.722541 containerd[1675]: time="2025-07-06T23:55:21.722244600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:55:21.722541 containerd[1675]: time="2025-07-06T23:55:21.722319200Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:55:21.731976 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:55:21.743459 containerd[1675]: time="2025-07-06T23:55:21.743342800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jul 6 23:55:21.743459 containerd[1675]: time="2025-07-06T23:55:21.743421800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:55:21.745099 containerd[1675]: time="2025-07-06T23:55:21.743446400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:55:21.745099 containerd[1675]: time="2025-07-06T23:55:21.743536700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:55:21.745099 containerd[1675]: time="2025-07-06T23:55:21.743596200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:55:21.745099 containerd[1675]: time="2025-07-06T23:55:21.743802400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:55:21.745099 containerd[1675]: time="2025-07-06T23:55:21.744261000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:55:21.745099 containerd[1675]: time="2025-07-06T23:55:21.744413600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:55:21.745099 containerd[1675]: time="2025-07-06T23:55:21.744434900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:55:21.745099 containerd[1675]: time="2025-07-06T23:55:21.744452800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745398600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745441400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745460300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745484200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745504500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745522400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745538200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745558000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745612600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745633000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745649500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745668000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745683600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.746964 containerd[1675]: time="2025-07-06T23:55:21.745702000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745719000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745736700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745754600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745782500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745818700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745835700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745851800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745875500Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745914800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745932800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.745955400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.746015300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.746055700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:55:21.747519 containerd[1675]: time="2025-07-06T23:55:21.746071400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:55:21.747986 containerd[1675]: time="2025-07-06T23:55:21.746087000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:55:21.747986 containerd[1675]: time="2025-07-06T23:55:21.746099500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.747986 containerd[1675]: time="2025-07-06T23:55:21.746115800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:55:21.747986 containerd[1675]: time="2025-07-06T23:55:21.746130000Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:55:21.747986 containerd[1675]: time="2025-07-06T23:55:21.746143600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:55:21.748894 containerd[1675]: time="2025-07-06T23:55:21.746507900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:55:21.748894 containerd[1675]: time="2025-07-06T23:55:21.746587400Z" level=info msg="Connect containerd service" Jul 6 23:55:21.748894 containerd[1675]: time="2025-07-06T23:55:21.746639300Z" level=info msg="using legacy CRI 
server" Jul 6 23:55:21.748894 containerd[1675]: time="2025-07-06T23:55:21.746650600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:55:21.748894 containerd[1675]: time="2025-07-06T23:55:21.746828100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:55:21.749847 containerd[1675]: time="2025-07-06T23:55:21.749638000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:55:21.751052 containerd[1675]: time="2025-07-06T23:55:21.750888800Z" level=info msg="Start subscribing containerd event" Jul 6 23:55:21.751417 containerd[1675]: time="2025-07-06T23:55:21.751398100Z" level=info msg="Start recovering state" Jul 6 23:55:21.751672 containerd[1675]: time="2025-07-06T23:55:21.751622400Z" level=info msg="Start event monitor" Jul 6 23:55:21.754066 containerd[1675]: time="2025-07-06T23:55:21.751674600Z" level=info msg="Start snapshots syncer" Jul 6 23:55:21.754066 containerd[1675]: time="2025-07-06T23:55:21.751690900Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:55:21.754066 containerd[1675]: time="2025-07-06T23:55:21.751701600Z" level=info msg="Start streaming server" Jul 6 23:55:21.754066 containerd[1675]: time="2025-07-06T23:55:21.751634300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:55:21.754066 containerd[1675]: time="2025-07-06T23:55:21.751901500Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:55:21.752070 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:55:21.755242 containerd[1675]: time="2025-07-06T23:55:21.755213200Z" level=info msg="containerd successfully booted in 0.109593s" Jul 6 23:55:22.098337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:22.102789 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:55:22.106696 systemd[1]: Startup finished in 1.012s (firmware) + 25.619s (loader) + 1.034s (kernel) + 11.339s (initrd) + 10.739s (userspace) = 49.745s. Jul 6 23:55:22.118287 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:22.453118 login[1785]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:55:22.458765 login[1786]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:55:22.469908 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:55:22.470130 systemd-logind[1657]: New session 1 of user core. Jul 6 23:55:22.478264 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:55:22.482658 systemd-logind[1657]: New session 2 of user core. Jul 6 23:55:22.511568 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:55:22.520464 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 6 23:55:22.528706 (systemd)[1814]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:55:22.711099 kubelet[1803]: E0706 23:55:22.708888 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:22.712257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:22.712445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:22.712962 systemd[1]: kubelet.service: Consumed 1.012s CPU time. Jul 6 23:55:22.742329 systemd[1814]: Queued start job for default target default.target. Jul 6 23:55:22.749009 systemd[1814]: Created slice app.slice - User Application Slice. Jul 6 23:55:22.749194 systemd[1814]: Reached target paths.target - Paths. Jul 6 23:55:22.749290 systemd[1814]: Reached target timers.target - Timers. Jul 6 23:55:22.751156 systemd[1814]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:55:22.771447 systemd[1814]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:55:22.771601 systemd[1814]: Reached target sockets.target - Sockets. Jul 6 23:55:22.771623 systemd[1814]: Reached target basic.target - Basic System. Jul 6 23:55:22.771741 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:55:22.772083 systemd[1814]: Reached target default.target - Main User Target. Jul 6 23:55:22.772132 systemd[1814]: Startup finished in 230ms. Jul 6 23:55:22.777208 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:55:22.778221 systemd[1]: Started session-2.scope - Session 2 of User core. 
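The kubelet failure above is the expected first-boot state: /var/lib/kubelet/config.yaml is only written once kubeadm init/join has run, and until then systemd keeps restarting the unit (the scheduled restart appears further down). A trivial sketch of the precondition it is failing on:

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.is_file():
        print(f"kubelet config present ({cfg.stat().st_size} bytes); service can start")
    else:
        print("no kubelet config yet - expected before kubeadm init/join; "
              "systemd will keep restarting the unit until it exists")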
Jul 6 23:55:23.064277 waagent[1783]: 2025-07-06T23:55:23.064108Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.065733Z INFO Daemon Daemon OS: flatcar 4081.3.4 Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.066768Z INFO Daemon Daemon Python: 3.11.9 Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.068152Z INFO Daemon Daemon Run daemon Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.068971Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.4' Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.069814Z INFO Daemon Daemon Using waagent for provisioning Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.071042Z INFO Daemon Daemon Activate resource disk Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.071868Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.076477Z INFO Daemon Daemon Found device: None Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.077383Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.078431Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.080535Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:55:23.103356 waagent[1783]: 2025-07-06T23:55:23.080759Z INFO Daemon Daemon Running default provisioning handler Jul 6 23:55:23.106721 waagent[1783]: 2025-07-06T23:55:23.106635Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 6 23:55:23.113584 waagent[1783]: 2025-07-06T23:55:23.113510Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 6 23:55:23.122305 waagent[1783]: 2025-07-06T23:55:23.114643Z INFO Daemon Daemon cloud-init is enabled: False Jul 6 23:55:23.122305 waagent[1783]: 2025-07-06T23:55:23.115509Z INFO Daemon Daemon Copying ovf-env.xml Jul 6 23:55:23.184223 waagent[1783]: 2025-07-06T23:55:23.181573Z INFO Daemon Daemon Successfully mounted dvd Jul 6 23:55:23.210535 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 6 23:55:23.212923 waagent[1783]: 2025-07-06T23:55:23.212845Z INFO Daemon Daemon Detect protocol endpoint Jul 6 23:55:23.228275 waagent[1783]: 2025-07-06T23:55:23.214313Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:55:23.228275 waagent[1783]: 2025-07-06T23:55:23.214794Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 6 23:55:23.228275 waagent[1783]: 2025-07-06T23:55:23.215720Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 6 23:55:23.228275 waagent[1783]: 2025-07-06T23:55:23.216770Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 6 23:55:23.228275 waagent[1783]: 2025-07-06T23:55:23.217690Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 6 23:55:23.253031 waagent[1783]: 2025-07-06T23:55:23.252954Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 6 23:55:23.261301 waagent[1783]: 2025-07-06T23:55:23.254497Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 6 23:55:23.261301 waagent[1783]: 2025-07-06T23:55:23.255223Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 6 23:55:23.391990 waagent[1783]: 2025-07-06T23:55:23.391833Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 6 23:55:23.398080 waagent[1783]: 2025-07-06T23:55:23.393428Z INFO Daemon Daemon Forcing an update of the goal state. Jul 6 23:55:23.398304 waagent[1783]: 2025-07-06T23:55:23.398251Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:55:23.414245 waagent[1783]: 2025-07-06T23:55:23.414185Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 6 23:55:23.430761 waagent[1783]: 2025-07-06T23:55:23.415849Z INFO Daemon Jul 6 23:55:23.430761 waagent[1783]: 2025-07-06T23:55:23.417961Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0a99505f-da1a-40b7-b550-adb9471e5e00 eTag: 8469405957042131288 source: Fabric] Jul 6 23:55:23.430761 waagent[1783]: 2025-07-06T23:55:23.419473Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 6 23:55:23.430761 waagent[1783]: 2025-07-06T23:55:23.420608Z INFO Daemon Jul 6 23:55:23.430761 waagent[1783]: 2025-07-06T23:55:23.421635Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:55:23.433967 waagent[1783]: 2025-07-06T23:55:23.433919Z INFO Daemon Daemon Downloading artifacts profile blob Jul 6 23:55:23.513773 waagent[1783]: 2025-07-06T23:55:23.513683Z INFO Daemon Downloaded certificate {'thumbprint': '648A8DBD843B734F4172D52E0C9EC629075FA246', 'hasPrivateKey': True} Jul 6 23:55:23.519088 waagent[1783]: 2025-07-06T23:55:23.519007Z INFO Daemon Fetch goal state completed Jul 6 23:55:23.529054 waagent[1783]: 2025-07-06T23:55:23.528998Z INFO Daemon Daemon Starting provisioning Jul 6 23:55:23.535956 waagent[1783]: 2025-07-06T23:55:23.530299Z INFO Daemon Daemon Handle ovf-env.xml. Jul 6 23:55:23.535956 waagent[1783]: 2025-07-06T23:55:23.531129Z INFO Daemon Daemon Set hostname [ci-4081.3.4-a-2f8c6d8615] Jul 6 23:55:23.550722 waagent[1783]: 2025-07-06T23:55:23.550631Z INFO Daemon Daemon Publish hostname [ci-4081.3.4-a-2f8c6d8615] Jul 6 23:55:23.558869 waagent[1783]: 2025-07-06T23:55:23.552157Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 6 23:55:23.558869 waagent[1783]: 2025-07-06T23:55:23.553132Z INFO Daemon Daemon Primary interface is [eth0] Jul 6 23:55:23.577531 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:23.577540 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
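Protocol detection above is plain HTTP against the WireServer: after confirming a route to 168.63.129.16, the daemon probes the versions endpoint (the same http://168.63.129.16/?comp=versions URL coreos-metadata fetched earlier) and settles on the 2012-11-30 wire protocol. A sketch of that probe; the response body is an XML list of supported versions:

    import urllib.request

    with urllib.request.urlopen("http://168.63.129.16/?comp=versions", timeout=5) as r:
        print(r.read().decode())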
Jul 6 23:55:23.577588 systemd-networkd[1418]: eth0: DHCP lease lost Jul 6 23:55:23.578938 waagent[1783]: 2025-07-06T23:55:23.578830Z INFO Daemon Daemon Create user account if not exists Jul 6 23:55:23.596041 waagent[1783]: 2025-07-06T23:55:23.580795Z INFO Daemon Daemon User core already exists, skip useradd Jul 6 23:55:23.596041 waagent[1783]: 2025-07-06T23:55:23.581517Z INFO Daemon Daemon Configure sudoer Jul 6 23:55:23.596041 waagent[1783]: 2025-07-06T23:55:23.582649Z INFO Daemon Daemon Configure sshd Jul 6 23:55:23.596041 waagent[1783]: 2025-07-06T23:55:23.583423Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 6 23:55:23.596041 waagent[1783]: 2025-07-06T23:55:23.583976Z INFO Daemon Daemon Deploy ssh public key. Jul 6 23:55:23.597123 systemd-networkd[1418]: eth0: DHCPv6 lease lost Jul 6 23:55:23.629096 systemd-networkd[1418]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 6 23:55:24.693256 waagent[1783]: 2025-07-06T23:55:24.693174Z INFO Daemon Daemon Provisioning complete Jul 6 23:55:24.705478 waagent[1783]: 2025-07-06T23:55:24.705405Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 6 23:55:24.712576 waagent[1783]: 2025-07-06T23:55:24.706812Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 6 23:55:24.712576 waagent[1783]: 2025-07-06T23:55:24.708070Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 6 23:55:24.833997 waagent[1867]: 2025-07-06T23:55:24.833892Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 6 23:55:24.834407 waagent[1867]: 2025-07-06T23:55:24.834089Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.4 Jul 6 23:55:24.834407 waagent[1867]: 2025-07-06T23:55:24.834178Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 6 23:55:24.868515 waagent[1867]: 2025-07-06T23:55:24.868414Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 6 23:55:24.868747 waagent[1867]: 2025-07-06T23:55:24.868695Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:55:24.868843 waagent[1867]: 2025-07-06T23:55:24.868802Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:55:24.877034 waagent[1867]: 2025-07-06T23:55:24.876947Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:55:24.882981 waagent[1867]: 2025-07-06T23:55:24.882918Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 6 23:55:24.883476 waagent[1867]: 2025-07-06T23:55:24.883418Z INFO ExtHandler Jul 6 23:55:24.883569 waagent[1867]: 2025-07-06T23:55:24.883516Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e9948f6e-4077-4b54-8f3d-8dbe6b74f610 eTag: 8469405957042131288 source: Fabric] Jul 6 23:55:24.883877 waagent[1867]: 2025-07-06T23:55:24.883825Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 6 23:55:24.884463 waagent[1867]: 2025-07-06T23:55:24.884407Z INFO ExtHandler Jul 6 23:55:24.884526 waagent[1867]: 2025-07-06T23:55:24.884494Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:55:24.888285 waagent[1867]: 2025-07-06T23:55:24.888242Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 6 23:55:24.949815 waagent[1867]: 2025-07-06T23:55:24.949667Z INFO ExtHandler Downloaded certificate {'thumbprint': '648A8DBD843B734F4172D52E0C9EC629075FA246', 'hasPrivateKey': True} Jul 6 23:55:24.950353 waagent[1867]: 2025-07-06T23:55:24.950291Z INFO ExtHandler Fetch goal state completed Jul 6 23:55:24.964912 waagent[1867]: 2025-07-06T23:55:24.964838Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1867 Jul 6 23:55:24.965098 waagent[1867]: 2025-07-06T23:55:24.965043Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 6 23:55:24.966671 waagent[1867]: 2025-07-06T23:55:24.966611Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.4', '', 'Flatcar Container Linux by Kinvolk'] Jul 6 23:55:24.967027 waagent[1867]: 2025-07-06T23:55:24.966977Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 6 23:55:24.984940 waagent[1867]: 2025-07-06T23:55:24.984886Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 6 23:55:24.985228 waagent[1867]: 2025-07-06T23:55:24.985175Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 6 23:55:24.992634 waagent[1867]: 2025-07-06T23:55:24.992591Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 6 23:55:24.999741 systemd[1]: Reloading requested from client PID 1880 ('systemctl') (unit waagent.service)... Jul 6 23:55:24.999758 systemd[1]: Reloading... Jul 6 23:55:25.078059 zram_generator::config[1910]: No configuration found. Jul 6 23:55:25.208611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:25.290442 systemd[1]: Reloading finished in 290 ms. Jul 6 23:55:25.316287 waagent[1867]: 2025-07-06T23:55:25.314533Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 6 23:55:25.324184 systemd[1]: Reloading requested from client PID 1971 ('systemctl') (unit waagent.service)... Jul 6 23:55:25.324200 systemd[1]: Reloading... Jul 6 23:55:25.402118 zram_generator::config[2001]: No configuration found. Jul 6 23:55:25.541085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:25.623531 systemd[1]: Reloading finished in 298 ms. Jul 6 23:55:25.652051 waagent[1867]: 2025-07-06T23:55:25.651241Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 6 23:55:25.652051 waagent[1867]: 2025-07-06T23:55:25.651452Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 6 23:55:26.820779 waagent[1867]: 2025-07-06T23:55:26.820677Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Jul 6 23:55:26.821571 waagent[1867]: 2025-07-06T23:55:26.821499Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 6 23:55:26.822492 waagent[1867]: 2025-07-06T23:55:26.822417Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 6 23:55:26.823114 waagent[1867]: 2025-07-06T23:55:26.823057Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:55:26.823250 waagent[1867]: 2025-07-06T23:55:26.823116Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 6 23:55:26.823583 waagent[1867]: 2025-07-06T23:55:26.823524Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:55:26.823747 waagent[1867]: 2025-07-06T23:55:26.823684Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 6 23:55:26.823886 waagent[1867]: 2025-07-06T23:55:26.823818Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 6 23:55:26.824468 waagent[1867]: 2025-07-06T23:55:26.824405Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 6 23:55:26.824615 waagent[1867]: 2025-07-06T23:55:26.824538Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:55:26.824708 waagent[1867]: 2025-07-06T23:55:26.824595Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 6 23:55:26.825011 waagent[1867]: 2025-07-06T23:55:26.824946Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 6 23:55:26.825538 waagent[1867]: 2025-07-06T23:55:26.825484Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:55:26.825671 waagent[1867]: 2025-07-06T23:55:26.825585Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 6 23:55:26.826490 waagent[1867]: 2025-07-06T23:55:26.826434Z INFO EnvHandler ExtHandler Configure routes Jul 6 23:55:26.826887 waagent[1867]: 2025-07-06T23:55:26.826812Z INFO EnvHandler ExtHandler Gateway:None Jul 6 23:55:26.827141 waagent[1867]: 2025-07-06T23:55:26.827095Z INFO EnvHandler ExtHandler Routes:None Jul 6 23:55:26.828321 waagent[1867]: 2025-07-06T23:55:26.828276Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 6 23:55:26.828321 waagent[1867]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 6 23:55:26.828321 waagent[1867]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 6 23:55:26.828321 waagent[1867]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 6 23:55:26.828321 waagent[1867]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:55:26.828321 waagent[1867]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:55:26.828321 waagent[1867]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:55:26.839051 waagent[1867]: 2025-07-06T23:55:26.838763Z INFO ExtHandler ExtHandler Jul 6 23:55:26.839051 waagent[1867]: 2025-07-06T23:55:26.838886Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4aa6930a-6d3d-4da4-b1bb-9f4c35174b1e correlation 7fbd0e75-2972-48ff-8b05-5511c6152a99 created: 2025-07-06T23:54:18.498909Z] Jul 6 23:55:26.839555 waagent[1867]: 2025-07-06T23:55:26.839504Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
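The MonitorHandler routing-table dump above is /proc/net/route verbatim, where addresses are little-endian hex (0108C80A is 10.200.8.1). A short decoder sketch:

    import socket
    import struct

    def decode(hexaddr: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian 32-bit hex.
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            iface, dest, gw = line.split()[:3]
            print(iface, decode(dest), "via", decode(gw))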
Jul 6 23:55:26.841289 waagent[1867]: 2025-07-06T23:55:26.840369Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 6 23:55:26.873686 waagent[1867]: 2025-07-06T23:55:26.873619Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 440FE797-81BE-443E-9BC7-2DAEBA82D7D4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 6 23:55:26.883217 waagent[1867]: 2025-07-06T23:55:26.883142Z INFO MonitorHandler ExtHandler Network interfaces: Jul 6 23:55:26.883217 waagent[1867]: Executing ['ip', '-a', '-o', 'link']: Jul 6 23:55:26.883217 waagent[1867]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 6 23:55:26.883217 waagent[1867]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:4a:45:64 brd ff:ff:ff:ff:ff:ff Jul 6 23:55:26.883217 waagent[1867]: 3: enP47654s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:4a:45:64 brd ff:ff:ff:ff:ff:ff\ altname enP47654p0s2 Jul 6 23:55:26.883217 waagent[1867]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 6 23:55:26.883217 waagent[1867]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 6 23:55:26.883217 waagent[1867]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 6 23:55:26.883217 waagent[1867]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 6 23:55:26.883217 waagent[1867]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 6 23:55:26.883217 waagent[1867]: 2: eth0 inet6 fe80::7eed:8dff:fe4a:4564/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:55:26.883217 waagent[1867]: 3: enP47654s1 inet6 fe80::7eed:8dff:fe4a:4564/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:55:26.926150 waagent[1867]: 2025-07-06T23:55:26.926085Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jul 6 23:55:26.926150 waagent[1867]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:26.926150 waagent[1867]: pkts bytes target prot opt in out source destination Jul 6 23:55:26.926150 waagent[1867]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:26.926150 waagent[1867]: pkts bytes target prot opt in out source destination Jul 6 23:55:26.926150 waagent[1867]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:26.926150 waagent[1867]: pkts bytes target prot opt in out source destination Jul 6 23:55:26.926150 waagent[1867]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:55:26.926150 waagent[1867]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:55:26.926150 waagent[1867]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:55:26.929895 waagent[1867]: 2025-07-06T23:55:26.929822Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 6 23:55:26.929895 waagent[1867]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:26.929895 waagent[1867]: pkts bytes target prot opt in out source destination Jul 6 23:55:26.929895 waagent[1867]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:26.929895 waagent[1867]: pkts bytes target prot opt in out source destination Jul 6 23:55:26.929895 waagent[1867]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:55:26.929895 waagent[1867]: pkts bytes target prot opt in out source destination Jul 6 23:55:26.929895 waagent[1867]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:55:26.929895 waagent[1867]: 14 1517 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:55:26.929895 waagent[1867]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:55:26.930314 waagent[1867]: 2025-07-06T23:55:26.930157Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 6 23:55:32.963350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:55:32.971260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:33.245841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:33.260379 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:33.719283 kubelet[2101]: E0706 23:55:33.719163 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:33.722911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:33.723156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:35.172401 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:55:35.180332 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.16.10:36144.service - OpenSSH per-connection server daemon (10.200.16.10:36144). Jul 6 23:55:35.864929 sshd[2109]: Accepted publickey for core from 10.200.16.10 port 36144 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:35.866729 sshd[2109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:35.871636 systemd-logind[1657]: New session 3 of user core. 
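The firewall dumps above show three OUTPUT rules pinning access to the WireServer. Reconstructed as iptables commands, as a sketch only (waagent manages these rules itself, and which table it targets can vary with configuration):

    # Allow DNS to the WireServer, allow root-owned TCP to it,
    # and drop new connections to it from any other user
    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP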
Jul 6 23:55:35.878167 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:55:36.416318 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.16.10:36154.service - OpenSSH per-connection server daemon (10.200.16.10:36154). Jul 6 23:55:37.034871 sshd[2114]: Accepted publickey for core from 10.200.16.10 port 36154 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:37.036660 sshd[2114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:37.042248 systemd-logind[1657]: New session 4 of user core. Jul 6 23:55:37.045164 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:55:37.480000 sshd[2114]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:37.484242 systemd[1]: sshd@1-10.200.8.12:22-10.200.16.10:36154.service: Deactivated successfully. Jul 6 23:55:37.486175 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:55:37.486833 systemd-logind[1657]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:55:37.487717 systemd-logind[1657]: Removed session 4. Jul 6 23:55:37.595927 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.16.10:36162.service - OpenSSH per-connection server daemon (10.200.16.10:36162). Jul 6 23:55:38.223893 sshd[2121]: Accepted publickey for core from 10.200.16.10 port 36162 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:38.225680 sshd[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:38.230649 systemd-logind[1657]: New session 5 of user core. Jul 6 23:55:38.239191 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:55:38.665407 sshd[2121]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:38.668452 systemd[1]: sshd@2-10.200.8.12:22-10.200.16.10:36162.service: Deactivated successfully. Jul 6 23:55:38.670545 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:55:38.671995 systemd-logind[1657]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:55:38.673109 systemd-logind[1657]: Removed session 5. Jul 6 23:55:38.780917 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.16.10:36176.service - OpenSSH per-connection server daemon (10.200.16.10:36176). Jul 6 23:55:39.406209 sshd[2128]: Accepted publickey for core from 10.200.16.10 port 36176 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:39.407953 sshd[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:39.412770 systemd-logind[1657]: New session 6 of user core. Jul 6 23:55:39.420194 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:55:39.862801 sshd[2128]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:39.867324 systemd[1]: sshd@3-10.200.8.12:22-10.200.16.10:36176.service: Deactivated successfully. Jul 6 23:55:39.869477 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:55:39.870168 systemd-logind[1657]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:55:39.871061 systemd-logind[1657]: Removed session 6. Jul 6 23:55:39.973467 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.16.10:52972.service - OpenSSH per-connection server daemon (10.200.16.10:52972). 
Jul 6 23:55:40.599173 sshd[2135]: Accepted publickey for core from 10.200.16.10 port 52972 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:40.600883 sshd[2135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:40.605868 systemd-logind[1657]: New session 7 of user core. Jul 6 23:55:40.613182 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:55:41.053994 sudo[2138]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:55:41.054379 sudo[2138]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:41.085871 sudo[2138]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:41.186913 sshd[2135]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:41.191769 systemd[1]: sshd@4-10.200.8.12:22-10.200.16.10:52972.service: Deactivated successfully. Jul 6 23:55:41.193967 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:55:41.194918 systemd-logind[1657]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:55:41.195938 systemd-logind[1657]: Removed session 7. Jul 6 23:55:41.301558 systemd[1]: Started sshd@5-10.200.8.12:22-10.200.16.10:52980.service - OpenSSH per-connection server daemon (10.200.16.10:52980). Jul 6 23:55:41.922985 sshd[2143]: Accepted publickey for core from 10.200.16.10 port 52980 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:41.924820 sshd[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:41.930298 systemd-logind[1657]: New session 8 of user core. Jul 6 23:55:41.935195 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:55:42.269532 sudo[2147]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:55:42.270202 sudo[2147]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:42.273375 sudo[2147]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:42.278261 sudo[2146]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:55:42.278603 sudo[2146]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:42.291340 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:42.293160 auditctl[2150]: No rules Jul 6 23:55:42.293516 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:55:42.293721 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:42.296240 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:42.328096 augenrules[2168]: No rules Jul 6 23:55:42.329540 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:42.331008 sudo[2146]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:42.432142 sshd[2143]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:42.435667 systemd[1]: sshd@5-10.200.8.12:22-10.200.16.10:52980.service: Deactivated successfully. Jul 6 23:55:42.437939 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:55:42.439806 systemd-logind[1657]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:55:42.440734 systemd-logind[1657]: Removed session 8. Jul 6 23:55:42.542534 systemd[1]: Started sshd@6-10.200.8.12:22-10.200.16.10:52988.service - OpenSSH per-connection server daemon (10.200.16.10:52988). 
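The audit-rules restart a few entries up flushed the loaded rule set after the rules.d files were removed, hence "No rules" from both auditctl and augenrules. A sketch of the equivalent manual sequence, assuming standard auditd userspace tools are present:

    auditctl -D          # flush the currently loaded audit rules
    augenrules --load    # rebuild /etc/audit/audit.rules from /etc/audit/rules.d/ and load it
    auditctl -l          # list what is active ("No rules" here, since rules.d was emptied)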
Jul 6 23:55:43.168234 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 52988 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc Jul 6 23:55:43.169991 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:43.175146 systemd-logind[1657]: New session 9 of user core. Jul 6 23:55:43.181181 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:55:43.512873 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:55:43.513250 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:43.805679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:55:43.812233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:44.341522 chronyd[1670]: Selected source PHC0 Jul 6 23:55:44.624381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:44.628970 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:44.679517 kubelet[2198]: E0706 23:55:44.679474 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:44.682042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:44.682301 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:45.095338 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:55:45.096770 (dockerd)[2210]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:55:46.388564 dockerd[2210]: time="2025-07-06T23:55:46.388498019Z" level=info msg="Starting up" Jul 6 23:55:46.738006 dockerd[2210]: time="2025-07-06T23:55:46.737861619Z" level=info msg="Loading containers: start." Jul 6 23:55:46.908179 kernel: Initializing XFRM netlink socket Jul 6 23:55:47.028566 systemd-networkd[1418]: docker0: Link UP Jul 6 23:55:47.057376 dockerd[2210]: time="2025-07-06T23:55:47.057332319Z" level=info msg="Loading containers: done." Jul 6 23:55:47.110181 dockerd[2210]: time="2025-07-06T23:55:47.110124319Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:55:47.110361 dockerd[2210]: time="2025-07-06T23:55:47.110251219Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:55:47.110421 dockerd[2210]: time="2025-07-06T23:55:47.110401019Z" level=info msg="Daemon has completed initialization" Jul 6 23:55:47.161677 dockerd[2210]: time="2025-07-06T23:55:47.161592419Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:55:47.162189 systemd[1]: Started docker.service - Docker Application Container Engine. 
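The kubelet failure above keeps recurring because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written by kubeadm init/join (which install.sh presumably drives). For illustration only, a minimal hand-written file using real KubeletConfiguration v1beta1 field names with illustrative values:

    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches the CgroupDriver reported later in this log
    staticPodPath: /etc/kubernetes/manifests   # where the control-plane static pods come from
    EOF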
Jul 6 23:55:48.561199 containerd[1675]: time="2025-07-06T23:55:48.561160119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:55:49.298915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513882336.mount: Deactivated successfully. Jul 6 23:55:50.918907 containerd[1675]: time="2025-07-06T23:55:50.918846519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:50.922274 containerd[1675]: time="2025-07-06T23:55:50.922231519Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jul 6 23:55:50.926095 containerd[1675]: time="2025-07-06T23:55:50.926034819Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:50.932814 containerd[1675]: time="2025-07-06T23:55:50.932748119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:50.935872 containerd[1675]: time="2025-07-06T23:55:50.935825119Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.3746244s" Jul 6 23:55:50.935994 containerd[1675]: time="2025-07-06T23:55:50.935881819Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 6 23:55:50.938611 containerd[1675]: time="2025-07-06T23:55:50.938480419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:55:52.374424 containerd[1675]: time="2025-07-06T23:55:52.374358496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:52.376651 containerd[1675]: time="2025-07-06T23:55:52.376586763Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jul 6 23:55:52.380337 containerd[1675]: time="2025-07-06T23:55:52.380287339Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:52.385902 containerd[1675]: time="2025-07-06T23:55:52.385849205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:52.386963 containerd[1675]: time="2025-07-06T23:55:52.386817608Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.448297386s" Jul 6 23:55:52.386963 
containerd[1675]: time="2025-07-06T23:55:52.386862017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 6 23:55:52.387796 containerd[1675]: time="2025-07-06T23:55:52.387771708Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:55:53.839122 containerd[1675]: time="2025-07-06T23:55:53.839064875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:53.841991 containerd[1675]: time="2025-07-06T23:55:53.841923437Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jul 6 23:55:53.848138 containerd[1675]: time="2025-07-06T23:55:53.848083270Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:53.852978 containerd[1675]: time="2025-07-06T23:55:53.852924875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:53.853938 containerd[1675]: time="2025-07-06T23:55:53.853897696Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.466033171s" Jul 6 23:55:53.854014 containerd[1675]: time="2025-07-06T23:55:53.853942697Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 6 23:55:53.854785 containerd[1675]: time="2025-07-06T23:55:53.854615411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:55:54.805964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:55:54.816359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:54.977193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:54.985936 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:55.249113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3813424082.mount: Deactivated successfully. Jul 6 23:55:55.625924 kubelet[2417]: E0706 23:55:55.625815 2417 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:55.628635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:55.628858 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
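The PullImage/ImageCreate entries here are containerd servicing CRI pull requests. The same pull can be reproduced by hand through the CRI socket, assuming crictl is installed and containerd listens on its default socket:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-scheduler:v1.32.6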
Jul 6 23:55:56.141890 containerd[1675]: time="2025-07-06T23:55:56.141809494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:56.148519 containerd[1675]: time="2025-07-06T23:55:56.148439337Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jul 6 23:55:56.151924 containerd[1675]: time="2025-07-06T23:55:56.151841711Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:56.162789 containerd[1675]: time="2025-07-06T23:55:56.162713646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:56.163921 containerd[1675]: time="2025-07-06T23:55:56.163453062Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.30880075s" Jul 6 23:55:56.163921 containerd[1675]: time="2025-07-06T23:55:56.163497563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 6 23:55:56.164218 containerd[1675]: time="2025-07-06T23:55:56.164195978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:55:56.686935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025691111.mount: Deactivated successfully. 
Jul 6 23:55:58.107038 containerd[1675]: time="2025-07-06T23:55:58.106974809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:58.109683 containerd[1675]: time="2025-07-06T23:55:58.109617666Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 6 23:55:58.115064 containerd[1675]: time="2025-07-06T23:55:58.113950260Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:58.122699 containerd[1675]: time="2025-07-06T23:55:58.122650448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:58.124283 containerd[1675]: time="2025-07-06T23:55:58.123708871Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.959444592s" Jul 6 23:55:58.124283 containerd[1675]: time="2025-07-06T23:55:58.123753572Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:55:58.124653 containerd[1675]: time="2025-07-06T23:55:58.124625791Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:55:58.614496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603846839.mount: Deactivated successfully. 
Jul 6 23:55:58.638546 containerd[1675]: time="2025-07-06T23:55:58.638491508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:58.640590 containerd[1675]: time="2025-07-06T23:55:58.640526452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 6 23:55:58.644349 containerd[1675]: time="2025-07-06T23:55:58.644279434Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:58.649065 containerd[1675]: time="2025-07-06T23:55:58.648995036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:58.649830 containerd[1675]: time="2025-07-06T23:55:58.649686251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 525.023858ms" Jul 6 23:55:58.649830 containerd[1675]: time="2025-07-06T23:55:58.649724851Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:55:58.650538 containerd[1675]: time="2025-07-06T23:55:58.650505168Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:55:59.614671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620907171.mount: Deactivated successfully. Jul 6 23:56:01.890108 containerd[1675]: time="2025-07-06T23:56:01.890048795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:01.894875 containerd[1675]: time="2025-07-06T23:56:01.894798718Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jul 6 23:56:01.901742 containerd[1675]: time="2025-07-06T23:56:01.901666297Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:01.906375 containerd[1675]: time="2025-07-06T23:56:01.906299418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:01.907600 containerd[1675]: time="2025-07-06T23:56:01.907393546Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.256845477s" Jul 6 23:56:01.907600 containerd[1675]: time="2025-07-06T23:56:01.907436747Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 6 23:56:04.504152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
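After the etcd pull completes, the full control-plane image set is local. A quick verification, under the same crictl assumption as above:

    crictl images | grep -E 'kube-|etcd|coredns|pause'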
Jul 6 23:56:04.516381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:04.543283 systemd[1]: Reloading requested from client PID 2567 ('systemctl') (unit session-9.scope)... Jul 6 23:56:04.543319 systemd[1]: Reloading... Jul 6 23:56:04.647054 zram_generator::config[2610]: No configuration found. Jul 6 23:56:04.765760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:56:04.851943 systemd[1]: Reloading finished in 308 ms. Jul 6 23:56:05.099989 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:56:05.100160 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:56:05.100510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:05.111055 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 6 23:56:05.116785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:05.995173 update_engine[1658]: I20250706 23:56:05.995082 1658 update_attempter.cc:509] Updating boot flags... Jul 6 23:56:07.038086 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2681) Jul 6 23:56:07.168062 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2684) Jul 6 23:56:07.468809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:07.480365 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:56:07.517466 kubelet[2740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:07.517886 kubelet[2740]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:56:07.517886 kubelet[2740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
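The deprecation warnings above concern flags passed on the kubelet command line; kubeadm-style installs inject them through a systemd drop-in. To see the unit together with the drop-ins that populate KUBELET_KUBEADM_ARGS and friends:

    systemctl cat kubelet.service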
Jul 6 23:56:07.517886 kubelet[2740]: I0706 23:56:07.517577 2740 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:56:08.381837 kubelet[2740]: I0706 23:56:08.381788 2740 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:56:08.381837 kubelet[2740]: I0706 23:56:08.381819 2740 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:56:08.382233 kubelet[2740]: I0706 23:56:08.382204 2740 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:56:08.406980 kubelet[2740]: E0706 23:56:08.406930 2740 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:08.408072 kubelet[2740]: I0706 23:56:08.407987 2740 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:56:08.419111 kubelet[2740]: E0706 23:56:08.419060 2740 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:56:08.419111 kubelet[2740]: I0706 23:56:08.419101 2740 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:56:08.422544 kubelet[2740]: I0706 23:56:08.422519 2740 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:56:08.422813 kubelet[2740]: I0706 23:56:08.422775 2740 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:56:08.422999 kubelet[2740]: I0706 23:56:08.422808 2740 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-2f8c6d8615","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:56:08.423704 kubelet[2740]: I0706 23:56:08.423678 2740 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:56:08.423704 kubelet[2740]: I0706 23:56:08.423706 2740 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:56:08.423858 kubelet[2740]: I0706 23:56:08.423837 2740 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:08.426753 kubelet[2740]: I0706 23:56:08.426732 2740 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:56:08.426837 kubelet[2740]: I0706 23:56:08.426763 2740 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:56:08.426837 kubelet[2740]: I0706 23:56:08.426789 2740 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:56:08.426837 kubelet[2740]: I0706 23:56:08.426804 2740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:56:08.435012 kubelet[2740]: W0706 23:56:08.434252 2740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Jul 6 23:56:08.435012 kubelet[2740]: E0706 23:56:08.434315 2740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:08.435012 kubelet[2740]: W0706 23:56:08.434652 
2740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-2f8c6d8615&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Jul 6 23:56:08.435012 kubelet[2740]: E0706 23:56:08.434697 2740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-2f8c6d8615&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:08.435285 kubelet[2740]: I0706 23:56:08.435261 2740 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:56:08.436600 kubelet[2740]: I0706 23:56:08.435635 2740 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:56:08.436600 kubelet[2740]: W0706 23:56:08.435718 2740 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:56:08.437899 kubelet[2740]: I0706 23:56:08.437877 2740 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:56:08.437970 kubelet[2740]: I0706 23:56:08.437923 2740 server.go:1287] "Started kubelet" Jul 6 23:56:08.438092 kubelet[2740]: I0706 23:56:08.438062 2740 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:56:08.439515 kubelet[2740]: I0706 23:56:08.438877 2740 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:56:08.441606 kubelet[2740]: I0706 23:56:08.441584 2740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:56:08.444898 kubelet[2740]: I0706 23:56:08.444839 2740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:56:08.445244 kubelet[2740]: I0706 23:56:08.445228 2740 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:56:08.447141 kubelet[2740]: E0706 23:56:08.445520 2740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-a-2f8c6d8615.184fcecefadea75a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-a-2f8c6d8615,UID:ci-4081.3.4-a-2f8c6d8615,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-a-2f8c6d8615,},FirstTimestamp:2025-07-06 23:56:08.437892954 +0000 UTC m=+0.953912020,LastTimestamp:2025-07-06 23:56:08.437892954 +0000 UTC m=+0.953912020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-a-2f8c6d8615,}" Jul 6 23:56:08.448597 kubelet[2740]: I0706 23:56:08.448442 2740 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:56:08.449065 kubelet[2740]: I0706 23:56:08.448870 2740 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:56:08.449143 kubelet[2740]: E0706 23:56:08.449105 2740 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" Jul 6 23:56:08.450848 kubelet[2740]: I0706 23:56:08.450828 2740 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:56:08.450963 kubelet[2740]: I0706 23:56:08.450941 2740 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:56:08.452169 kubelet[2740]: W0706 23:56:08.452116 2740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Jul 6 23:56:08.452246 kubelet[2740]: E0706 23:56:08.452179 2740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:08.452294 kubelet[2740]: E0706 23:56:08.452266 2740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-2f8c6d8615?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="200ms" Jul 6 23:56:08.452901 kubelet[2740]: I0706 23:56:08.452682 2740 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:56:08.455047 kubelet[2740]: I0706 23:56:08.454367 2740 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:56:08.455047 kubelet[2740]: I0706 23:56:08.454385 2740 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:56:08.476821 kubelet[2740]: E0706 23:56:08.476782 2740 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:56:08.482692 kubelet[2740]: I0706 23:56:08.482661 2740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:56:08.484310 kubelet[2740]: I0706 23:56:08.484279 2740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:56:08.484310 kubelet[2740]: I0706 23:56:08.484312 2740 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:56:08.484428 kubelet[2740]: I0706 23:56:08.484333 2740 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:56:08.484428 kubelet[2740]: I0706 23:56:08.484343 2740 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:56:08.484428 kubelet[2740]: E0706 23:56:08.484391 2740 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:56:08.488468 kubelet[2740]: W0706 23:56:08.488441 2740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Jul 6 23:56:08.488555 kubelet[2740]: E0706 23:56:08.488535 2740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:08.516373 kubelet[2740]: I0706 23:56:08.516327 2740 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:56:08.516373 kubelet[2740]: I0706 23:56:08.516354 2740 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:56:08.516607 kubelet[2740]: I0706 23:56:08.516418 2740 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:08.521649 kubelet[2740]: I0706 23:56:08.521619 2740 policy_none.go:49] "None policy: Start" Jul 6 23:56:08.521649 kubelet[2740]: I0706 23:56:08.521643 2740 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:56:08.521649 kubelet[2740]: I0706 23:56:08.521656 2740 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:56:08.532776 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:56:08.548110 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:56:08.549307 kubelet[2740]: E0706 23:56:08.549251 2740 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" Jul 6 23:56:08.551490 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:56:08.561947 kubelet[2740]: I0706 23:56:08.561734 2740 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:56:08.562316 kubelet[2740]: I0706 23:56:08.561961 2740 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:56:08.562316 kubelet[2740]: I0706 23:56:08.561977 2740 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:56:08.562316 kubelet[2740]: I0706 23:56:08.562301 2740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:56:08.564000 kubelet[2740]: E0706 23:56:08.563968 2740 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:56:08.564159 kubelet[2740]: E0706 23:56:08.564040 2740 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-a-2f8c6d8615\" not found" Jul 6 23:56:08.594823 systemd[1]: Created slice kubepods-burstable-podf3884e376619d98422b0d85dbbc639e0.slice - libcontainer container kubepods-burstable-podf3884e376619d98422b0d85dbbc639e0.slice. 
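Every "connection refused" in this stretch is the kubelet trying to reach its own API server at 10.200.8.12:6443 before the kube-apiserver static pod exists. The same reachability check by hand:

    curl -sk https://10.200.8.12:6443/healthz; echo
    # refused until the kube-apiserver static pod is up, then: ok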
Jul 6 23:56:08.612413 kubelet[2740]: E0706 23:56:08.612363 2740 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.616274 systemd[1]: Created slice kubepods-burstable-podf931ae0a263cbbdc711120f299dcb756.slice - libcontainer container kubepods-burstable-podf931ae0a263cbbdc711120f299dcb756.slice. Jul 6 23:56:08.618455 kubelet[2740]: E0706 23:56:08.618433 2740 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.620379 systemd[1]: Created slice kubepods-burstable-podfe1b66999c13503d8db9904c686170cc.slice - libcontainer container kubepods-burstable-podfe1b66999c13503d8db9904c686170cc.slice. Jul 6 23:56:08.621983 kubelet[2740]: E0706 23:56:08.621960 2740 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.652880 kubelet[2740]: E0706 23:56:08.652737 2740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-2f8c6d8615?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="400ms" Jul 6 23:56:08.664580 kubelet[2740]: I0706 23:56:08.664546 2740 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.664941 kubelet[2740]: E0706 23:56:08.664914 2740 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.752251 kubelet[2740]: I0706 23:56:08.752199 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.752251 kubelet[2740]: I0706 23:56:08.752263 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.752679 kubelet[2740]: I0706 23:56:08.752291 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3884e376619d98422b0d85dbbc639e0-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f3884e376619d98422b0d85dbbc639e0\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.752679 kubelet[2740]: I0706 23:56:08.752318 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3884e376619d98422b0d85dbbc639e0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f3884e376619d98422b0d85dbbc639e0\") " 
pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.752679 kubelet[2740]: I0706 23:56:08.752353 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3884e376619d98422b0d85dbbc639e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f3884e376619d98422b0d85dbbc639e0\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.752679 kubelet[2740]: I0706 23:56:08.752383 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe1b66999c13503d8db9904c686170cc-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-2f8c6d8615\" (UID: \"fe1b66999c13503d8db9904c686170cc\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.752679 kubelet[2740]: I0706 23:56:08.752409 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.753014 kubelet[2740]: I0706 23:56:08.752438 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.753014 kubelet[2740]: I0706 23:56:08.752467 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.867938 kubelet[2740]: I0706 23:56:08.867901 2740 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.868339 kubelet[2740]: E0706 23:56:08.868306 2740 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:08.913923 containerd[1675]: time="2025-07-06T23:56:08.913778692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-2f8c6d8615,Uid:f3884e376619d98422b0d85dbbc639e0,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:08.920401 containerd[1675]: time="2025-07-06T23:56:08.920363402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-2f8c6d8615,Uid:f931ae0a263cbbdc711120f299dcb756,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:08.923137 containerd[1675]: time="2025-07-06T23:56:08.922912583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-2f8c6d8615,Uid:fe1b66999c13503d8db9904c686170cc,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:09.054151 kubelet[2740]: E0706 23:56:09.054093 2740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-2f8c6d8615?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="800ms" Jul 6 23:56:09.271243 kubelet[2740]: I0706 23:56:09.271125 2740 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:09.272396 kubelet[2740]: E0706 23:56:09.272341 2740 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:09.428833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080240392.mount: Deactivated successfully. Jul 6 23:56:09.464991 containerd[1675]: time="2025-07-06T23:56:09.464927025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:09.467454 containerd[1675]: time="2025-07-06T23:56:09.467349702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 6 23:56:09.471285 containerd[1675]: time="2025-07-06T23:56:09.471247326Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:09.475164 kubelet[2740]: W0706 23:56:09.475132 2740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Jul 6 23:56:09.475256 kubelet[2740]: E0706 23:56:09.475177 2740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:09.475467 containerd[1675]: time="2025-07-06T23:56:09.475433259Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:09.479070 containerd[1675]: time="2025-07-06T23:56:09.479012673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:56:09.482494 containerd[1675]: time="2025-07-06T23:56:09.482456782Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:09.485506 containerd[1675]: time="2025-07-06T23:56:09.485230971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:56:09.490225 containerd[1675]: time="2025-07-06T23:56:09.490193828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:56:09.490971 containerd[1675]: time="2025-07-06T23:56:09.490935652Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.950767ms" Jul 6 23:56:09.492875 containerd[1675]: time="2025-07-06T23:56:09.492837713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 572.402108ms" Jul 6 23:56:09.493349 containerd[1675]: time="2025-07-06T23:56:09.493320828Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.453833ms" Jul 6 23:56:09.717311 kubelet[2740]: W0706 23:56:09.717238 2740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-2f8c6d8615&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Jul 6 23:56:09.717772 kubelet[2740]: E0706 23:56:09.717333 2740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-2f8c6d8615&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:09.855504 kubelet[2740]: E0706 23:56:09.855451 2740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-2f8c6d8615?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="1.6s" Jul 6 23:56:10.002818 kubelet[2740]: W0706 23:56:10.002647 2740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Jul 6 23:56:10.002818 kubelet[2740]: E0706 23:56:10.002732 2740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:10.027627 kubelet[2740]: W0706 23:56:10.027560 2740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Jul 6 23:56:10.027627 kubelet[2740]: E0706 23:56:10.027635 2740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:10.074889 kubelet[2740]: I0706 23:56:10.074853 2740 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:10.075270 kubelet[2740]: E0706 23:56:10.075236 2740 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:10.144422 containerd[1675]: time="2025-07-06T23:56:10.144319037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:10.145596 containerd[1675]: time="2025-07-06T23:56:10.145071561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:10.145596 containerd[1675]: time="2025-07-06T23:56:10.145112162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.145596 containerd[1675]: time="2025-07-06T23:56:10.145241166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.147566 containerd[1675]: time="2025-07-06T23:56:10.147474137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:10.149215 containerd[1675]: time="2025-07-06T23:56:10.148787679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:10.149215 containerd[1675]: time="2025-07-06T23:56:10.148845181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:10.149215 containerd[1675]: time="2025-07-06T23:56:10.148878482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.149215 containerd[1675]: time="2025-07-06T23:56:10.148977585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.149215 containerd[1675]: time="2025-07-06T23:56:10.147543939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:10.149215 containerd[1675]: time="2025-07-06T23:56:10.147701844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.149215 containerd[1675]: time="2025-07-06T23:56:10.147798247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:10.197243 systemd[1]: Started cri-containerd-d304c015a4f801dd76b90b6820f080252de67a12acf3b788b738ed577a1bbb9b.scope - libcontainer container d304c015a4f801dd76b90b6820f080252de67a12acf3b788b738ed577a1bbb9b. Jul 6 23:56:10.203886 systemd[1]: Started cri-containerd-0f2101458d7adae1f7e8cce81b6c5f3929944f0fea644a67511952d6098ef034.scope - libcontainer container 0f2101458d7adae1f7e8cce81b6c5f3929944f0fea644a67511952d6098ef034. 
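[Editor's note: the "RunPodSandbox for &PodSandboxMetadata{...}" entries above, and the cri-containerd scopes systemd starts here, record kubelet driving containerd over the CRI gRPC API. A stripped-down sketch of that call follows, assuming the default containerd socket path; the metadata values are copied from the kube-apiserver entry in this log, and kubelet's real request carries far more configuration (labels, annotations, log directory, security context).]

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// kubelet talks to containerd over the CRI gRPC API on a unix socket.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	// Metadata mirrors the PodSandboxMetadata printed by containerd above.
    	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-apiserver-ci-4081.3.4-a-2f8c6d8615",
    				Uid:       "f3884e376619d98422b0d85dbbc639e0",
    				Namespace: "kube-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. d304c015a4f8...
    }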
Jul 6 23:56:10.206732 systemd[1]: Started cri-containerd-44f168b0fbeb66afa156ad528b0867fe1768307da30d623cea31b65caaa9bf00.scope - libcontainer container 44f168b0fbeb66afa156ad528b0867fe1768307da30d623cea31b65caaa9bf00. Jul 6 23:56:10.281691 containerd[1675]: time="2025-07-06T23:56:10.281530802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-2f8c6d8615,Uid:f931ae0a263cbbdc711120f299dcb756,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f2101458d7adae1f7e8cce81b6c5f3929944f0fea644a67511952d6098ef034\"" Jul 6 23:56:10.291672 containerd[1675]: time="2025-07-06T23:56:10.291524619Z" level=info msg="CreateContainer within sandbox \"0f2101458d7adae1f7e8cce81b6c5f3929944f0fea644a67511952d6098ef034\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:56:10.292494 containerd[1675]: time="2025-07-06T23:56:10.292457549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-2f8c6d8615,Uid:f3884e376619d98422b0d85dbbc639e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d304c015a4f801dd76b90b6820f080252de67a12acf3b788b738ed577a1bbb9b\"" Jul 6 23:56:10.296102 containerd[1675]: time="2025-07-06T23:56:10.296003462Z" level=info msg="CreateContainer within sandbox \"d304c015a4f801dd76b90b6820f080252de67a12acf3b788b738ed577a1bbb9b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:56:10.300380 containerd[1675]: time="2025-07-06T23:56:10.300158094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-2f8c6d8615,Uid:fe1b66999c13503d8db9904c686170cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"44f168b0fbeb66afa156ad528b0867fe1768307da30d623cea31b65caaa9bf00\"" Jul 6 23:56:10.303277 containerd[1675]: time="2025-07-06T23:56:10.303246592Z" level=info msg="CreateContainer within sandbox \"44f168b0fbeb66afa156ad528b0867fe1768307da30d623cea31b65caaa9bf00\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:56:10.376952 containerd[1675]: time="2025-07-06T23:56:10.376697029Z" level=info msg="CreateContainer within sandbox \"d304c015a4f801dd76b90b6820f080252de67a12acf3b788b738ed577a1bbb9b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e2e79b275a5625c2502d234def0453911bcb76bc1cd933f22fce3af2c0344f29\"" Jul 6 23:56:10.380715 containerd[1675]: time="2025-07-06T23:56:10.380671855Z" level=info msg="CreateContainer within sandbox \"0f2101458d7adae1f7e8cce81b6c5f3929944f0fea644a67511952d6098ef034\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"929b196fd65391b5e2f4c90488f432d207faced3b2643a9d82166451d1f6131f\"" Jul 6 23:56:10.381047 containerd[1675]: time="2025-07-06T23:56:10.380953564Z" level=info msg="StartContainer for \"e2e79b275a5625c2502d234def0453911bcb76bc1cd933f22fce3af2c0344f29\"" Jul 6 23:56:10.386644 containerd[1675]: time="2025-07-06T23:56:10.385161498Z" level=info msg="CreateContainer within sandbox \"44f168b0fbeb66afa156ad528b0867fe1768307da30d623cea31b65caaa9bf00\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9be1f31a4287c59c3ade148430cb482b4ac577f16c18149bf7ecdf2af484fb4d\"" Jul 6 23:56:10.386644 containerd[1675]: time="2025-07-06T23:56:10.385365905Z" level=info msg="StartContainer for \"929b196fd65391b5e2f4c90488f432d207faced3b2643a9d82166451d1f6131f\"" Jul 6 23:56:10.392933 containerd[1675]: time="2025-07-06T23:56:10.392896544Z" level=info msg="StartContainer for 
\"9be1f31a4287c59c3ade148430cb482b4ac577f16c18149bf7ecdf2af484fb4d\"" Jul 6 23:56:10.448247 systemd[1]: Started cri-containerd-929b196fd65391b5e2f4c90488f432d207faced3b2643a9d82166451d1f6131f.scope - libcontainer container 929b196fd65391b5e2f4c90488f432d207faced3b2643a9d82166451d1f6131f. Jul 6 23:56:10.450583 systemd[1]: Started cri-containerd-e2e79b275a5625c2502d234def0453911bcb76bc1cd933f22fce3af2c0344f29.scope - libcontainer container e2e79b275a5625c2502d234def0453911bcb76bc1cd933f22fce3af2c0344f29. Jul 6 23:56:10.463219 systemd[1]: Started cri-containerd-9be1f31a4287c59c3ade148430cb482b4ac577f16c18149bf7ecdf2af484fb4d.scope - libcontainer container 9be1f31a4287c59c3ade148430cb482b4ac577f16c18149bf7ecdf2af484fb4d. Jul 6 23:56:10.536576 containerd[1675]: time="2025-07-06T23:56:10.536100100Z" level=info msg="StartContainer for \"e2e79b275a5625c2502d234def0453911bcb76bc1cd933f22fce3af2c0344f29\" returns successfully" Jul 6 23:56:10.541151 containerd[1675]: time="2025-07-06T23:56:10.540358835Z" level=info msg="StartContainer for \"929b196fd65391b5e2f4c90488f432d207faced3b2643a9d82166451d1f6131f\" returns successfully" Jul 6 23:56:10.587120 containerd[1675]: time="2025-07-06T23:56:10.586932317Z" level=info msg="StartContainer for \"9be1f31a4287c59c3ade148430cb482b4ac577f16c18149bf7ecdf2af484fb4d\" returns successfully" Jul 6 23:56:10.604333 kubelet[2740]: E0706 23:56:10.604255 2740 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:56:11.508189 kubelet[2740]: E0706 23:56:11.507699 2740 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:11.512174 kubelet[2740]: E0706 23:56:11.511599 2740 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:11.516083 kubelet[2740]: E0706 23:56:11.515136 2740 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:11.679960 kubelet[2740]: I0706 23:56:11.679209 2740 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.517420 kubelet[2740]: E0706 23:56:12.517379 2740 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.519530 kubelet[2740]: E0706 23:56:12.518360 2740 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.519726 kubelet[2740]: E0706 23:56:12.518568 2740 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.838087 kubelet[2740]: E0706 23:56:12.838029 2740 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" 
err="nodes \"ci-4081.3.4-a-2f8c6d8615\" not found" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.898479 kubelet[2740]: I0706 23:56:12.898257 2740 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.950481 kubelet[2740]: I0706 23:56:12.950427 2740 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.977457 kubelet[2740]: E0706 23:56:12.977417 2740 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-a-2f8c6d8615\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.977457 kubelet[2740]: I0706 23:56:12.977470 2740 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.979841 kubelet[2740]: E0706 23:56:12.979655 2740 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.979841 kubelet[2740]: I0706 23:56:12.979691 2740 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:12.981352 kubelet[2740]: E0706 23:56:12.981323 2740 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-a-2f8c6d8615\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:13.431002 kubelet[2740]: I0706 23:56:13.430965 2740 apiserver.go:52] "Watching apiserver" Jul 6 23:56:13.451869 kubelet[2740]: I0706 23:56:13.451837 2740 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:56:13.517224 kubelet[2740]: I0706 23:56:13.517047 2740 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:13.517224 kubelet[2740]: I0706 23:56:13.517083 2740 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:13.519611 kubelet[2740]: E0706 23:56:13.519568 2740 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-a-2f8c6d8615\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:13.520107 kubelet[2740]: E0706 23:56:13.520090 2740 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-a-2f8c6d8615\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:15.108843 systemd[1]: Reloading requested from client PID 3014 ('systemctl') (unit session-9.scope)... Jul 6 23:56:15.108858 systemd[1]: Reloading... Jul 6 23:56:15.197109 zram_generator::config[3050]: No configuration found. Jul 6 23:56:15.329775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:56:15.437785 systemd[1]: Reloading finished in 328 ms. Jul 6 23:56:15.487518 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 6 23:56:15.501374 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:56:15.501602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:15.501666 systemd[1]: kubelet.service: Consumed 1.251s CPU time, 131.0M memory peak, 0B memory swap peak. Jul 6 23:56:15.507351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:16.185411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:16.197594 (kubelet)[3121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:56:16.276301 kubelet[3121]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:16.276301 kubelet[3121]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:56:16.276301 kubelet[3121]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:16.276760 kubelet[3121]: I0706 23:56:16.276421 3121 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:56:16.286305 kubelet[3121]: I0706 23:56:16.286264 3121 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:56:16.286305 kubelet[3121]: I0706 23:56:16.286299 3121 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:56:16.289587 kubelet[3121]: I0706 23:56:16.289554 3121 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:56:16.292878 kubelet[3121]: I0706 23:56:16.292690 3121 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:56:16.297276 kubelet[3121]: I0706 23:56:16.295752 3121 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:56:16.300307 kubelet[3121]: E0706 23:56:16.300237 3121 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:56:16.300307 kubelet[3121]: I0706 23:56:16.300269 3121 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:56:16.306895 kubelet[3121]: I0706 23:56:16.306867 3121 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:56:16.307227 kubelet[3121]: I0706 23:56:16.307181 3121 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:56:16.307527 kubelet[3121]: I0706 23:56:16.307229 3121 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-2f8c6d8615","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:56:16.308706 kubelet[3121]: I0706 23:56:16.307542 3121 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:56:16.308706 kubelet[3121]: I0706 23:56:16.307558 3121 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:56:16.308706 kubelet[3121]: I0706 23:56:16.307640 3121 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:16.308706 kubelet[3121]: I0706 23:56:16.307833 3121 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:56:16.309124 kubelet[3121]: I0706 23:56:16.307865 3121 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:56:16.309201 kubelet[3121]: I0706 23:56:16.309165 3121 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:56:16.309201 kubelet[3121]: I0706 23:56:16.309182 3121 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:56:16.312176 kubelet[3121]: I0706 23:56:16.312157 3121 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:56:16.312850 kubelet[3121]: I0706 23:56:16.312641 3121 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:56:16.313432 kubelet[3121]: I0706 23:56:16.313188 3121 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:56:16.313432 kubelet[3121]: I0706 23:56:16.313223 3121 server.go:1287] "Started kubelet" Jul 6 23:56:16.319429 kubelet[3121]: I0706 23:56:16.319247 3121 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:56:16.323748 kubelet[3121]: I0706 23:56:16.323688 3121 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:56:16.325856 kubelet[3121]: I0706 23:56:16.324043 3121 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:56:16.328110 kubelet[3121]: I0706 23:56:16.328094 3121 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:56:16.335509 kubelet[3121]: I0706 23:56:16.334518 3121 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:56:16.349464 kubelet[3121]: I0706 23:56:16.349430 3121 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:56:16.354496 kubelet[3121]: I0706 23:56:16.352703 3121 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:56:16.354496 kubelet[3121]: E0706 23:56:16.352930 3121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-2f8c6d8615\" not found" Jul 6 23:56:16.360381 kubelet[3121]: I0706 23:56:16.357610 3121 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:56:16.360381 kubelet[3121]: I0706 23:56:16.357746 3121 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:56:16.361684 kubelet[3121]: I0706 23:56:16.361643 3121 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:56:16.362986 kubelet[3121]: I0706 23:56:16.362962 3121 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:56:16.363090 kubelet[3121]: I0706 23:56:16.363000 3121 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:56:16.363090 kubelet[3121]: I0706 23:56:16.363031 3121 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:56:16.363090 kubelet[3121]: I0706 23:56:16.363059 3121 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:56:16.363203 kubelet[3121]: E0706 23:56:16.363107 3121 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:56:16.369238 kubelet[3121]: I0706 23:56:16.369216 3121 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:56:16.369466 kubelet[3121]: I0706 23:56:16.369445 3121 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:56:16.373424 kubelet[3121]: E0706 23:56:16.373317 3121 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:56:16.376554 kubelet[3121]: I0706 23:56:16.376532 3121 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:56:16.448724 kubelet[3121]: I0706 23:56:16.448620 3121 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:56:16.448915 kubelet[3121]: I0706 23:56:16.448889 3121 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:56:16.449671 kubelet[3121]: I0706 23:56:16.448999 3121 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:16.452041 kubelet[3121]: I0706 23:56:16.450006 3121 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:56:16.452041 kubelet[3121]: I0706 23:56:16.450313 3121 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:56:16.452041 kubelet[3121]: I0706 23:56:16.450349 3121 policy_none.go:49] "None policy: Start" Jul 6 23:56:16.452041 kubelet[3121]: I0706 23:56:16.450363 3121 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:56:16.452041 kubelet[3121]: I0706 23:56:16.450378 3121 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:56:16.452041 kubelet[3121]: I0706 23:56:16.450569 3121 state_mem.go:75] "Updated machine memory state" Jul 6 23:56:16.463931 kubelet[3121]: E0706 23:56:16.463815 3121 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:56:16.464882 kubelet[3121]: I0706 23:56:16.464862 3121 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:56:16.465121 kubelet[3121]: I0706 23:56:16.465102 3121 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:56:16.465179 kubelet[3121]: I0706 23:56:16.465119 3121 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:56:16.466140 kubelet[3121]: I0706 23:56:16.465601 3121 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:56:16.471064 kubelet[3121]: E0706 23:56:16.470445 3121 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:56:16.582370 kubelet[3121]: I0706 23:56:16.581818 3121 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.591655 kubelet[3121]: I0706 23:56:16.591624 3121 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.591792 kubelet[3121]: I0706 23:56:16.591708 3121 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.664895 kubelet[3121]: I0706 23:56:16.664408 3121 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.664895 kubelet[3121]: I0706 23:56:16.664829 3121 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.664895 kubelet[3121]: I0706 23:56:16.664829 3121 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.675148 kubelet[3121]: W0706 23:56:16.675113 3121 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:16.680518 kubelet[3121]: W0706 23:56:16.680283 3121 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:16.680654 kubelet[3121]: W0706 23:56:16.680576 3121 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:56:16.759871 kubelet[3121]: I0706 23:56:16.759304 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3884e376619d98422b0d85dbbc639e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f3884e376619d98422b0d85dbbc639e0\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.759871 kubelet[3121]: I0706 23:56:16.759363 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.759871 kubelet[3121]: I0706 23:56:16.759392 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.759871 kubelet[3121]: I0706 23:56:16.759415 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.759871 kubelet[3121]: I0706 23:56:16.759442 3121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.760490 kubelet[3121]: I0706 23:56:16.759468 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe1b66999c13503d8db9904c686170cc-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-2f8c6d8615\" (UID: \"fe1b66999c13503d8db9904c686170cc\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.760490 kubelet[3121]: I0706 23:56:16.759492 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3884e376619d98422b0d85dbbc639e0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f3884e376619d98422b0d85dbbc639e0\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.760490 kubelet[3121]: I0706 23:56:16.759517 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f931ae0a263cbbdc711120f299dcb756-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f931ae0a263cbbdc711120f299dcb756\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:16.760490 kubelet[3121]: I0706 23:56:16.759540 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3884e376619d98422b0d85dbbc639e0-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-2f8c6d8615\" (UID: \"f3884e376619d98422b0d85dbbc639e0\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:17.310952 kubelet[3121]: I0706 23:56:17.310894 3121 apiserver.go:52] "Watching apiserver" Jul 6 23:56:17.358640 kubelet[3121]: I0706 23:56:17.358576 3121 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:56:17.431185 kubelet[3121]: I0706 23:56:17.430667 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-a-2f8c6d8615" podStartSLOduration=1.4306467889999999 podStartE2EDuration="1.430646789s" podCreationTimestamp="2025-07-06 23:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:17.422636173 +0000 UTC m=+1.219859033" watchObservedRunningTime="2025-07-06 23:56:17.430646789 +0000 UTC m=+1.227869649" Jul 6 23:56:17.441558 kubelet[3121]: I0706 23:56:17.441437 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-a-2f8c6d8615" podStartSLOduration=1.441417079 podStartE2EDuration="1.441417079s" podCreationTimestamp="2025-07-06 23:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:17.431070001 +0000 UTC m=+1.228292961" watchObservedRunningTime="2025-07-06 23:56:17.441417079 +0000 UTC m=+1.238639939" Jul 6 23:56:17.452430 kubelet[3121]: I0706 23:56:17.451686 3121 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2f8c6d8615" podStartSLOduration=1.451667556 podStartE2EDuration="1.451667556s" podCreationTimestamp="2025-07-06 23:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:17.441968394 +0000 UTC m=+1.239191254" watchObservedRunningTime="2025-07-06 23:56:17.451667556 +0000 UTC m=+1.248890516" Jul 6 23:56:20.939602 kubelet[3121]: I0706 23:56:20.939567 3121 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:56:20.940336 kubelet[3121]: I0706 23:56:20.940180 3121 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:56:20.940398 containerd[1675]: time="2025-07-06T23:56:20.939946702Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:56:21.813451 systemd[1]: Created slice kubepods-besteffort-pode021c80d_036a_4e7a_8967_a3ee568bc807.slice - libcontainer container kubepods-besteffort-pode021c80d_036a_4e7a_8967_a3ee568bc807.slice. Jul 6 23:56:21.889075 kubelet[3121]: I0706 23:56:21.889041 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e021c80d-036a-4e7a-8967-a3ee568bc807-xtables-lock\") pod \"kube-proxy-b9rmw\" (UID: \"e021c80d-036a-4e7a-8967-a3ee568bc807\") " pod="kube-system/kube-proxy-b9rmw" Jul 6 23:56:21.889075 kubelet[3121]: I0706 23:56:21.889079 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e021c80d-036a-4e7a-8967-a3ee568bc807-lib-modules\") pod \"kube-proxy-b9rmw\" (UID: \"e021c80d-036a-4e7a-8967-a3ee568bc807\") " pod="kube-system/kube-proxy-b9rmw" Jul 6 23:56:21.889075 kubelet[3121]: I0706 23:56:21.889107 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e021c80d-036a-4e7a-8967-a3ee568bc807-kube-proxy\") pod \"kube-proxy-b9rmw\" (UID: \"e021c80d-036a-4e7a-8967-a3ee568bc807\") " pod="kube-system/kube-proxy-b9rmw" Jul 6 23:56:21.889359 kubelet[3121]: I0706 23:56:21.889128 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lgp6\" (UniqueName: \"kubernetes.io/projected/e021c80d-036a-4e7a-8967-a3ee568bc807-kube-api-access-2lgp6\") pod \"kube-proxy-b9rmw\" (UID: \"e021c80d-036a-4e7a-8967-a3ee568bc807\") " pod="kube-system/kube-proxy-b9rmw" Jul 6 23:56:22.052265 systemd[1]: Created slice kubepods-besteffort-pode843a4ab_7514_47e8_9811_4edd91af8d97.slice - libcontainer container kubepods-besteffort-pode843a4ab_7514_47e8_9811_4edd91af8d97.slice. 
Jul 6 23:56:22.089994 kubelet[3121]: I0706 23:56:22.089844 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e843a4ab-7514-47e8-9811-4edd91af8d97-var-lib-calico\") pod \"tigera-operator-747864d56d-8x8sj\" (UID: \"e843a4ab-7514-47e8-9811-4edd91af8d97\") " pod="tigera-operator/tigera-operator-747864d56d-8x8sj" Jul 6 23:56:22.089994 kubelet[3121]: I0706 23:56:22.089902 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lk54\" (UniqueName: \"kubernetes.io/projected/e843a4ab-7514-47e8-9811-4edd91af8d97-kube-api-access-2lk54\") pod \"tigera-operator-747864d56d-8x8sj\" (UID: \"e843a4ab-7514-47e8-9811-4edd91af8d97\") " pod="tigera-operator/tigera-operator-747864d56d-8x8sj" Jul 6 23:56:22.123844 containerd[1675]: time="2025-07-06T23:56:22.123790619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9rmw,Uid:e021c80d-036a-4e7a-8967-a3ee568bc807,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:22.166476 containerd[1675]: time="2025-07-06T23:56:22.166372667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:22.166476 containerd[1675]: time="2025-07-06T23:56:22.166417668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:22.166476 containerd[1675]: time="2025-07-06T23:56:22.166431369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:22.166901 containerd[1675]: time="2025-07-06T23:56:22.166513771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:22.192301 systemd[1]: Started cri-containerd-f7af750be7b0a6872db792ccd11498d4bf0007271fa48ef4e2c4fe68a10f866e.scope - libcontainer container f7af750be7b0a6872db792ccd11498d4bf0007271fa48ef4e2c4fe68a10f866e. Jul 6 23:56:22.222539 containerd[1675]: time="2025-07-06T23:56:22.222439779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9rmw,Uid:e021c80d-036a-4e7a-8967-a3ee568bc807,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7af750be7b0a6872db792ccd11498d4bf0007271fa48ef4e2c4fe68a10f866e\"" Jul 6 23:56:22.226132 containerd[1675]: time="2025-07-06T23:56:22.226087977Z" level=info msg="CreateContainer within sandbox \"f7af750be7b0a6872db792ccd11498d4bf0007271fa48ef4e2c4fe68a10f866e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:56:22.262958 containerd[1675]: time="2025-07-06T23:56:22.262862168Z" level=info msg="CreateContainer within sandbox \"f7af750be7b0a6872db792ccd11498d4bf0007271fa48ef4e2c4fe68a10f866e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a2e47827799d55ce1610cb5ccc6aea9a835924dd6ef95f27b76ee247d93a08d\"" Jul 6 23:56:22.263782 containerd[1675]: time="2025-07-06T23:56:22.263752592Z" level=info msg="StartContainer for \"6a2e47827799d55ce1610cb5ccc6aea9a835924dd6ef95f27b76ee247d93a08d\"" Jul 6 23:56:22.292200 systemd[1]: Started cri-containerd-6a2e47827799d55ce1610cb5ccc6aea9a835924dd6ef95f27b76ee247d93a08d.scope - libcontainer container 6a2e47827799d55ce1610cb5ccc6aea9a835924dd6ef95f27b76ee247d93a08d. 
Jul 6 23:56:22.323051 containerd[1675]: time="2025-07-06T23:56:22.322996190Z" level=info msg="StartContainer for \"6a2e47827799d55ce1610cb5ccc6aea9a835924dd6ef95f27b76ee247d93a08d\" returns successfully" Jul 6 23:56:22.355679 containerd[1675]: time="2025-07-06T23:56:22.355544867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8x8sj,Uid:e843a4ab-7514-47e8-9811-4edd91af8d97,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:56:22.407853 containerd[1675]: time="2025-07-06T23:56:22.406901252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:22.407853 containerd[1675]: time="2025-07-06T23:56:22.406969154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:22.407853 containerd[1675]: time="2025-07-06T23:56:22.406988954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:22.407853 containerd[1675]: time="2025-07-06T23:56:22.407660772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:22.438271 kubelet[3121]: I0706 23:56:22.436979 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b9rmw" podStartSLOduration=1.436960662 podStartE2EDuration="1.436960662s" podCreationTimestamp="2025-07-06 23:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:22.436960562 +0000 UTC m=+6.234183422" watchObservedRunningTime="2025-07-06 23:56:22.436960662 +0000 UTC m=+6.234183522" Jul 6 23:56:22.440229 systemd[1]: Started cri-containerd-9a9af7fd389a3f8a3605c8f7e7099785051fad2727a0158367903c82e90d1191.scope - libcontainer container 9a9af7fd389a3f8a3605c8f7e7099785051fad2727a0158367903c82e90d1191. Jul 6 23:56:22.493294 containerd[1675]: time="2025-07-06T23:56:22.493248280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8x8sj,Uid:e843a4ab-7514-47e8-9811-4edd91af8d97,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9a9af7fd389a3f8a3605c8f7e7099785051fad2727a0158367903c82e90d1191\"" Jul 6 23:56:22.495053 containerd[1675]: time="2025-07-06T23:56:22.494974526Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:56:24.032394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910553105.mount: Deactivated successfully. 
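[Editor's note: the PullImage "quay.io/tigera/operator:v1.38.3" entry above is served by the CRI ImageService; the ~2.2s "Pulled image ... in 2.214775769s" timing reported below is the duration of exactly this round trip. A minimal sketch, with the socket path assumed as before:]

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	img := runtimeapi.NewImageServiceClient(conn)
    	start := time.Now()
    	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.3"},
    	})
    	if err != nil {
    		panic(err)
    	}
    	// containerd logs the digest-pinned reference once the pull completes.
    	fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
    }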
Jul 6 23:56:24.697543 containerd[1675]: time="2025-07-06T23:56:24.697489874Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:24.700177 containerd[1675]: time="2025-07-06T23:56:24.700111543Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:56:24.703532 containerd[1675]: time="2025-07-06T23:56:24.703482831Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:24.709135 containerd[1675]: time="2025-07-06T23:56:24.709086478Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:24.709936 containerd[1675]: time="2025-07-06T23:56:24.709788196Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.214775769s" Jul 6 23:56:24.709936 containerd[1675]: time="2025-07-06T23:56:24.709827497Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:56:24.712109 containerd[1675]: time="2025-07-06T23:56:24.712080856Z" level=info msg="CreateContainer within sandbox \"9a9af7fd389a3f8a3605c8f7e7099785051fad2727a0158367903c82e90d1191\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:56:24.744171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876214325.mount: Deactivated successfully. Jul 6 23:56:24.754766 containerd[1675]: time="2025-07-06T23:56:24.754726273Z" level=info msg="CreateContainer within sandbox \"9a9af7fd389a3f8a3605c8f7e7099785051fad2727a0158367903c82e90d1191\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c718f11fd87e5554969a797c0bc5a495a97f91c1e3d1b230bfddf3c581e37af2\"" Jul 6 23:56:24.756280 containerd[1675]: time="2025-07-06T23:56:24.755343789Z" level=info msg="StartContainer for \"c718f11fd87e5554969a797c0bc5a495a97f91c1e3d1b230bfddf3c581e37af2\"" Jul 6 23:56:24.790225 systemd[1]: Started cri-containerd-c718f11fd87e5554969a797c0bc5a495a97f91c1e3d1b230bfddf3c581e37af2.scope - libcontainer container c718f11fd87e5554969a797c0bc5a495a97f91c1e3d1b230bfddf3c581e37af2. 
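[Editor's note: the CreateContainer/StartContainer pair logged above for the tigera-operator container maps onto the two CRI RPCs sketched below. The sandbox ID and image reference are copied from this log; everything else (and the omitted sandbox config kubelet also passes) is an assumption.]

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	// Sandbox ID returned by the RunPodSandbox call earlier in the log.
    	sandboxID := "9a9af7fd389a3f8a3605c8f7e7099785051fad2727a0158367903c82e90d1191"
    	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sandboxID,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "tigera-operator", Attempt: 0},
    			Image:    &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.3"},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
    		ContainerId: created.ContainerId,
    	}); err != nil {
    		panic(err)
    	}
    	fmt.Println("started container:", created.ContainerId) // e.g. c718f11fd87e...
    }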
Jul 6 23:56:24.817213 containerd[1675]: time="2025-07-06T23:56:24.817163008Z" level=info msg="StartContainer for \"c718f11fd87e5554969a797c0bc5a495a97f91c1e3d1b230bfddf3c581e37af2\" returns successfully" Jul 6 23:56:25.433132 kubelet[3121]: I0706 23:56:25.432966 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-8x8sj" podStartSLOduration=2.216353519 podStartE2EDuration="4.432947336s" podCreationTimestamp="2025-07-06 23:56:21 +0000 UTC" firstStartedPulling="2025-07-06 23:56:22.494202505 +0000 UTC m=+6.291425365" lastFinishedPulling="2025-07-06 23:56:24.710796322 +0000 UTC m=+8.508019182" observedRunningTime="2025-07-06 23:56:25.43272673 +0000 UTC m=+9.229949690" watchObservedRunningTime="2025-07-06 23:56:25.432947336 +0000 UTC m=+9.230170196" Jul 6 23:56:29.274968 sudo[2179]: pam_unix(sudo:session): session closed for user root Jul 6 23:56:29.377294 sshd[2176]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:29.381956 systemd[1]: sshd@6-10.200.8.12:22-10.200.16.10:52988.service: Deactivated successfully. Jul 6 23:56:29.387683 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:56:29.389080 systemd[1]: session-9.scope: Consumed 4.110s CPU time, 156.5M memory peak, 0B memory swap peak. Jul 6 23:56:29.391817 systemd-logind[1657]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:56:29.393164 systemd-logind[1657]: Removed session 9. Jul 6 23:56:33.768936 systemd[1]: Created slice kubepods-besteffort-pod0e5033dd_230f_4878_aca1_12ef1d98cf32.slice - libcontainer container kubepods-besteffort-pod0e5033dd_230f_4878_aca1_12ef1d98cf32.slice. Jul 6 23:56:33.863263 kubelet[3121]: I0706 23:56:33.863184 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0e5033dd-230f-4878-aca1-12ef1d98cf32-typha-certs\") pod \"calico-typha-d6fcc6455-hbwlm\" (UID: \"0e5033dd-230f-4878-aca1-12ef1d98cf32\") " pod="calico-system/calico-typha-d6fcc6455-hbwlm" Jul 6 23:56:33.863263 kubelet[3121]: I0706 23:56:33.863233 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e5033dd-230f-4878-aca1-12ef1d98cf32-tigera-ca-bundle\") pod \"calico-typha-d6fcc6455-hbwlm\" (UID: \"0e5033dd-230f-4878-aca1-12ef1d98cf32\") " pod="calico-system/calico-typha-d6fcc6455-hbwlm" Jul 6 23:56:33.863263 kubelet[3121]: I0706 23:56:33.863259 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmrp\" (UniqueName: \"kubernetes.io/projected/0e5033dd-230f-4878-aca1-12ef1d98cf32-kube-api-access-mfmrp\") pod \"calico-typha-d6fcc6455-hbwlm\" (UID: \"0e5033dd-230f-4878-aca1-12ef1d98cf32\") " pod="calico-system/calico-typha-d6fcc6455-hbwlm" Jul 6 23:56:34.074855 containerd[1675]: time="2025-07-06T23:56:34.074812712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d6fcc6455-hbwlm,Uid:0e5033dd-230f-4878-aca1-12ef1d98cf32,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:34.135215 containerd[1675]: time="2025-07-06T23:56:34.135107323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:34.135720 containerd[1675]: time="2025-07-06T23:56:34.135656139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:34.135838 containerd[1675]: time="2025-07-06T23:56:34.135752341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:34.138100 containerd[1675]: time="2025-07-06T23:56:34.136792071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:34.165743 systemd[1]: Created slice kubepods-besteffort-pod7bfc1f91_59f7_4a7a_a43e_0e190a899c47.slice - libcontainer container kubepods-besteffort-pod7bfc1f91_59f7_4a7a_a43e_0e190a899c47.slice. Jul 6 23:56:34.197339 systemd[1]: Started cri-containerd-7114f9ee7ccac52bab82a1f9f445941132d1f737c83596b0d71fc2f9132e636f.scope - libcontainer container 7114f9ee7ccac52bab82a1f9f445941132d1f737c83596b0d71fc2f9132e636f. Jul 6 23:56:34.260487 containerd[1675]: time="2025-07-06T23:56:34.260437379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d6fcc6455-hbwlm,Uid:0e5033dd-230f-4878-aca1-12ef1d98cf32,Namespace:calico-system,Attempt:0,} returns sandbox id \"7114f9ee7ccac52bab82a1f9f445941132d1f737c83596b0d71fc2f9132e636f\"" Jul 6 23:56:34.263254 containerd[1675]: time="2025-07-06T23:56:34.263136055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:56:34.265470 kubelet[3121]: I0706 23:56:34.264930 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-var-lib-calico\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.265470 kubelet[3121]: I0706 23:56:34.264995 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-policysync\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.265470 kubelet[3121]: I0706 23:56:34.265072 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-cni-bin-dir\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.265470 kubelet[3121]: I0706 23:56:34.265104 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-var-run-calico\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.265470 kubelet[3121]: I0706 23:56:34.265135 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-cni-log-dir\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.265814 kubelet[3121]: I0706 23:56:34.265155 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-cni-net-dir\") pod 
\"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.265814 kubelet[3121]: I0706 23:56:34.265176 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-lib-modules\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.265814 kubelet[3121]: I0706 23:56:34.265200 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-xtables-lock\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.265814 kubelet[3121]: I0706 23:56:34.265220 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw62c\" (UniqueName: \"kubernetes.io/projected/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-kube-api-access-bw62c\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.265814 kubelet[3121]: I0706 23:56:34.265249 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-node-certs\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.266035 kubelet[3121]: I0706 23:56:34.265276 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-tigera-ca-bundle\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.266035 kubelet[3121]: I0706 23:56:34.265299 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7bfc1f91-59f7-4a7a-a43e-0e190a899c47-flexvol-driver-host\") pod \"calico-node-86pg2\" (UID: \"7bfc1f91-59f7-4a7a-a43e-0e190a899c47\") " pod="calico-system/calico-node-86pg2" Jul 6 23:56:34.371209 kubelet[3121]: E0706 23:56:34.370365 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:34.371209 kubelet[3121]: W0706 23:56:34.370390 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:34.371209 kubelet[3121]: E0706 23:56:34.370426 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
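The three-line error above is the kubelet's FlexVolume prober: it execs each driver found under the plugin directory with the argument `init` and JSON-decodes whatever the driver prints on stdout. The `uds` binary is not installed yet, so the exec fails, stdout is empty, and decoding an empty string yields "unexpected end of JSON input". A stdlib-only sketch that mirrors this behavior (not the kubelet's actual code; the kubelet's own exec wrapper is what reports "executable file not found in $PATH"):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON a FlexVolume driver prints on stdout
// (field names per the FlexVolume spec).
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probe(driver string) (*DriverStatus, error) {
	// Exec `<driver> init`; the binary is absent here, so the exec
	// errors and out stays empty.
	out, execErr := exec.Command(driver, "init").CombinedOutput()
	if execErr != nil {
		fmt.Printf("driver call failed: %s, args: [init], error: %v, output: %q\n",
			driver, execErr, out)
	}
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// json.Unmarshal on empty input fails with exactly
		// "unexpected end of JSON input", the E-level line above.
		return nil, err
	}
	return &st, nil
}

func main() {
	_, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println("probe error:", err)
}
```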
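The noise stops once Calico's pod2daemon-flexvol image (pulled later in this log) installs its `uds` driver into that directory. Purely as an illustration of what the prober expects back, a hypothetical stand-in driver would answer `init` with a success document, roughly:

```go
package main

import (
	"encoding/json"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Minimal success response per the FlexVolume spec; attach is
		// handled elsewhere, so the driver advertises attach: false.
		json.NewEncoder(os.Stdout).Encode(map[string]any{
			"status":       "Success",
			"capabilities": map[string]bool{"attach": false},
		})
		return
	}
	// Conventional reply for calls the driver does not implement.
	json.NewEncoder(os.Stdout).Encode(map[string]string{"status": "Not supported"})
}
```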
Jul 6 23:56:34.453701 kubelet[3121]: E0706 23:56:34.453658 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldbbw" podUID="80d646f2-c2b8-4ec5-90f1-97a890b8837a"
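The "cni plugin not initialized" sync error persists until calico-node writes a CNI configuration into the runtime's conf directory; containerd's CRI plugin reports NetworkReady=false while that directory is empty. A stdlib sketch of the same check, assuming the default conf dir /etc/cni/net.d and the typical Calico file name 10-calico.conflist (both conventions, not read from this log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// containerd's CRI plugin looks here for network configs;
	// calico-node's install step drops 10-calico.conflist once it runs.
	dir := "/etc/cni/net.d"
	var matches []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(dir, pat))
		matches = append(matches, m...)
	}
	if len(matches) == 0 {
		fmt.Println("NetworkReady=false: no CNI config found in", dir)
		os.Exit(1)
	}
	fmt.Println("CNI configs:", matches)
}
```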
Jul 6 23:56:34.476616 containerd[1675]: time="2025-07-06T23:56:34.476191400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-86pg2,Uid:7bfc1f91-59f7-4a7a-a43e-0e190a899c47,Namespace:calico-system,Attempt:0,}"
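The struct containerd prints in that line is the CRI PodSandboxMetadata message from the kubelet's RunPodSandbox request. A sketch of the same shape using the cri-api types (the import path is the v1 CRI API as published in recent k8s.io/cri-api releases; values are taken from the log line itself):

```go
package main

import (
	"fmt"

	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The metadata containerd echoes in its RunPodSandbox log line.
	meta := &runtime.PodSandboxMetadata{
		Name:      "calico-node-86pg2",
		Uid:       "7bfc1f91-59f7-4a7a-a43e-0e190a899c47",
		Namespace: "calico-system",
		Attempt:   0,
	}
	fmt.Printf("RunPodSandbox for &%v\n", *meta)
}
```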
Jul 6 23:56:34.521613 containerd[1675]: time="2025-07-06T23:56:34.521339581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:56:34.521613 containerd[1675]: time="2025-07-06T23:56:34.521414884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:56:34.521613 containerd[1675]: time="2025-07-06T23:56:34.521439884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:34.521613 containerd[1675]: time="2025-07-06T23:56:34.521524787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:34.543475 systemd[1]: Started cri-containerd-4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31.scope - libcontainer container 4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31.
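With the systemd cgroup driver in use here, each sandbox and container runs in a transient cri-containerd-<id>.scope unit, which is why systemd logs a "Started ... .scope" line per container. One way to inspect such a unit from Go, assuming systemctl is on PATH (the unit name is taken from the log line above):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	id := "4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31"
	unit := "cri-containerd-" + id + ".scope"
	// Equivalent to: systemctl show <unit> -p ActiveState -p SubState
	out, err := exec.Command("systemctl", "show", unit,
		"-p", "ActiveState", "-p", "SubState").Output()
	if err != nil {
		fmt.Println("systemctl failed:", err)
		return
	}
	fmt.Print(string(out))
}
```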
Jul 6 23:56:34.569628 kubelet[3121]: I0706 23:56:34.569467 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/80d646f2-c2b8-4ec5-90f1-97a890b8837a-registration-dir\") pod \"csi-node-driver-ldbbw\" (UID: \"80d646f2-c2b8-4ec5-90f1-97a890b8837a\") " pod="calico-system/csi-node-driver-ldbbw"
Jul 6 23:56:34.570620 kubelet[3121]: I0706 23:56:34.570088 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80d646f2-c2b8-4ec5-90f1-97a890b8837a-kubelet-dir\") pod \"csi-node-driver-ldbbw\" (UID: \"80d646f2-c2b8-4ec5-90f1-97a890b8837a\") " pod="calico-system/csi-node-driver-ldbbw"
Jul 6 23:56:34.570620 kubelet[3121]: I0706 23:56:34.570483 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/80d646f2-c2b8-4ec5-90f1-97a890b8837a-varrun\") pod \"csi-node-driver-ldbbw\" (UID: \"80d646f2-c2b8-4ec5-90f1-97a890b8837a\") " pod="calico-system/csi-node-driver-ldbbw"
Jul 6 23:56:34.571337 kubelet[3121]: I0706 23:56:34.571293 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpxk6\" (UniqueName: \"kubernetes.io/projected/80d646f2-c2b8-4ec5-90f1-97a890b8837a-kube-api-access-tpxk6\") pod \"csi-node-driver-ldbbw\" (UID: \"80d646f2-c2b8-4ec5-90f1-97a890b8837a\") " pod="calico-system/csi-node-driver-ldbbw"
Jul 6 23:56:34.573565 kubelet[3121]: I0706 23:56:34.573360 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/80d646f2-c2b8-4ec5-90f1-97a890b8837a-socket-dir\") pod \"csi-node-driver-ldbbw\" (UID: \"80d646f2-c2b8-4ec5-90f1-97a890b8837a\") " pod="calico-system/csi-node-driver-ldbbw"
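The csi-node-driver volumes above (registration-dir, kubelet-dir, varrun, socket-dir) implement the standard kubelet plugin-registration handshake: the driver's registrar sidecar creates a socket under the kubelet's plugins_registry directory and the kubelet dials it. A stdlib sketch that lists what is registered, assuming the conventional path /var/lib/kubelet/plugins_registry (not shown in the log):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The kubelet watches this directory for *.sock files created by
	// each CSI driver's node-driver-registrar sidecar.
	dir := "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "-", err)
		return
	}
	for _, e := range entries {
		fmt.Println("registered plugin socket:", e.Name())
	}
}
```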
Jul 6 23:56:34.578492 containerd[1675]: time="2025-07-06T23:56:34.577841084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-86pg2,Uid:7bfc1f91-59f7-4a7a-a43e-0e190a899c47,Namespace:calico-system,Attempt:0,} returns sandbox id \"4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31\""
Error: unexpected end of JSON input" Jul 6 23:56:34.691532 kubelet[3121]: E0706 23:56:34.691479 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:34.691532 kubelet[3121]: W0706 23:56:34.691493 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:34.691532 kubelet[3121]: E0706 23:56:34.691509 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:34.698884 kubelet[3121]: E0706 23:56:34.698865 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:34.698884 kubelet[3121]: W0706 23:56:34.698880 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:34.698988 kubelet[3121]: E0706 23:56:34.698896 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:35.465118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862593819.mount: Deactivated successfully. Jul 6 23:56:36.365076 kubelet[3121]: E0706 23:56:36.364521 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldbbw" podUID="80d646f2-c2b8-4ec5-90f1-97a890b8837a" Jul 6 23:56:36.432663 containerd[1675]: time="2025-07-06T23:56:36.432608109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:36.436390 containerd[1675]: time="2025-07-06T23:56:36.436334015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 6 23:56:36.442851 containerd[1675]: time="2025-07-06T23:56:36.441562763Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:36.445827 containerd[1675]: time="2025-07-06T23:56:36.445730982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:36.447106 containerd[1675]: time="2025-07-06T23:56:36.446949516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.183737959s" Jul 6 23:56:36.447106 containerd[1675]: time="2025-07-06T23:56:36.446988617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 6 23:56:36.449876 
containerd[1675]: time="2025-07-06T23:56:36.449538290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:56:36.469417 containerd[1675]: time="2025-07-06T23:56:36.469279450Z" level=info msg="CreateContainer within sandbox \"7114f9ee7ccac52bab82a1f9f445941132d1f737c83596b0d71fc2f9132e636f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:56:36.513364 containerd[1675]: time="2025-07-06T23:56:36.513313199Z" level=info msg="CreateContainer within sandbox \"7114f9ee7ccac52bab82a1f9f445941132d1f737c83596b0d71fc2f9132e636f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f7d72b38e3882cbb995ddcaa16f09e789e45a6a20ba592119d578d020417d5cb\"" Jul 6 23:56:36.514016 containerd[1675]: time="2025-07-06T23:56:36.513961417Z" level=info msg="StartContainer for \"f7d72b38e3882cbb995ddcaa16f09e789e45a6a20ba592119d578d020417d5cb\"" Jul 6 23:56:36.559220 systemd[1]: Started cri-containerd-f7d72b38e3882cbb995ddcaa16f09e789e45a6a20ba592119d578d020417d5cb.scope - libcontainer container f7d72b38e3882cbb995ddcaa16f09e789e45a6a20ba592119d578d020417d5cb. Jul 6 23:56:36.610631 containerd[1675]: time="2025-07-06T23:56:36.610572158Z" level=info msg="StartContainer for \"f7d72b38e3882cbb995ddcaa16f09e789e45a6a20ba592119d578d020417d5cb\" returns successfully" Jul 6 23:56:37.486340 kubelet[3121]: I0706 23:56:37.485608 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d6fcc6455-hbwlm" podStartSLOduration=2.299199151 podStartE2EDuration="4.485589785s" podCreationTimestamp="2025-07-06 23:56:33 +0000 UTC" firstStartedPulling="2025-07-06 23:56:34.262390134 +0000 UTC m=+18.059612994" lastFinishedPulling="2025-07-06 23:56:36.448780768 +0000 UTC m=+20.246003628" observedRunningTime="2025-07-06 23:56:37.485274876 +0000 UTC m=+21.282497836" watchObservedRunningTime="2025-07-06 23:56:37.485589785 +0000 UTC m=+21.282812745" Jul 6 23:56:37.502677 kubelet[3121]: E0706 23:56:37.502640 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.502677 kubelet[3121]: W0706 23:56:37.502661 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.502947 kubelet[3121]: E0706 23:56:37.502685 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.502947 kubelet[3121]: E0706 23:56:37.502942 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.503129 kubelet[3121]: W0706 23:56:37.502955 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.503129 kubelet[3121]: E0706 23:56:37.502969 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:37.503268 kubelet[3121]: E0706 23:56:37.503189 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.503268 kubelet[3121]: W0706 23:56:37.503200 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.503268 kubelet[3121]: E0706 23:56:37.503214 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.503542 kubelet[3121]: E0706 23:56:37.503461 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.503542 kubelet[3121]: W0706 23:56:37.503475 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.503542 kubelet[3121]: E0706 23:56:37.503488 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.504205 kubelet[3121]: E0706 23:56:37.503715 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.504205 kubelet[3121]: W0706 23:56:37.503725 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.504205 kubelet[3121]: E0706 23:56:37.503737 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.504205 kubelet[3121]: E0706 23:56:37.503935 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.504205 kubelet[3121]: W0706 23:56:37.503946 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.504205 kubelet[3121]: E0706 23:56:37.503959 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.505241 kubelet[3121]: E0706 23:56:37.504318 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.505241 kubelet[3121]: W0706 23:56:37.504329 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.505241 kubelet[3121]: E0706 23:56:37.504343 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:37.505241 kubelet[3121]: E0706 23:56:37.504558 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.505241 kubelet[3121]: W0706 23:56:37.504571 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.505241 kubelet[3121]: E0706 23:56:37.504584 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.505241 kubelet[3121]: E0706 23:56:37.504814 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.505241 kubelet[3121]: W0706 23:56:37.504825 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.505241 kubelet[3121]: E0706 23:56:37.504838 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.505241 kubelet[3121]: E0706 23:56:37.505094 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.507066 kubelet[3121]: W0706 23:56:37.505105 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.507066 kubelet[3121]: E0706 23:56:37.505119 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.507066 kubelet[3121]: E0706 23:56:37.505553 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.507066 kubelet[3121]: W0706 23:56:37.505564 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.507066 kubelet[3121]: E0706 23:56:37.505579 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.507066 kubelet[3121]: E0706 23:56:37.505779 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.507066 kubelet[3121]: W0706 23:56:37.505790 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.507066 kubelet[3121]: E0706 23:56:37.505801 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:37.507066 kubelet[3121]: E0706 23:56:37.506174 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.507066 kubelet[3121]: W0706 23:56:37.506186 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.508425 kubelet[3121]: E0706 23:56:37.506200 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.508425 kubelet[3121]: E0706 23:56:37.506424 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.508425 kubelet[3121]: W0706 23:56:37.506436 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.508425 kubelet[3121]: E0706 23:56:37.506449 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.508425 kubelet[3121]: E0706 23:56:37.506657 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.508425 kubelet[3121]: W0706 23:56:37.506668 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.508425 kubelet[3121]: E0706 23:56:37.506683 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.508425 kubelet[3121]: E0706 23:56:37.507047 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.508425 kubelet[3121]: W0706 23:56:37.507060 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.508425 kubelet[3121]: E0706 23:56:37.507074 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.508905 kubelet[3121]: E0706 23:56:37.507365 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.508905 kubelet[3121]: W0706 23:56:37.507377 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.508905 kubelet[3121]: E0706 23:56:37.507402 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:37.508905 kubelet[3121]: E0706 23:56:37.507731 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.508905 kubelet[3121]: W0706 23:56:37.507746 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.508905 kubelet[3121]: E0706 23:56:37.507764 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.508905 kubelet[3121]: E0706 23:56:37.508153 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.508905 kubelet[3121]: W0706 23:56:37.508166 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.508905 kubelet[3121]: E0706 23:56:37.508189 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.508905 kubelet[3121]: E0706 23:56:37.508420 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.509336 kubelet[3121]: W0706 23:56:37.508432 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.509336 kubelet[3121]: E0706 23:56:37.508457 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.509336 kubelet[3121]: E0706 23:56:37.508817 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.509336 kubelet[3121]: W0706 23:56:37.508828 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.509336 kubelet[3121]: E0706 23:56:37.508846 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.509336 kubelet[3121]: E0706 23:56:37.509081 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.509336 kubelet[3121]: W0706 23:56:37.509092 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.509336 kubelet[3121]: E0706 23:56:37.509192 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:37.509336 kubelet[3121]: E0706 23:56:37.509312 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.509336 kubelet[3121]: W0706 23:56:37.509321 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.509829 kubelet[3121]: E0706 23:56:37.509414 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.509829 kubelet[3121]: E0706 23:56:37.509626 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.509829 kubelet[3121]: W0706 23:56:37.509637 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.509829 kubelet[3121]: E0706 23:56:37.509717 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.510012 kubelet[3121]: E0706 23:56:37.509854 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.510012 kubelet[3121]: W0706 23:56:37.509863 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.510012 kubelet[3121]: E0706 23:56:37.509878 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.510194 kubelet[3121]: E0706 23:56:37.510173 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.510194 kubelet[3121]: W0706 23:56:37.510188 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.510360 kubelet[3121]: E0706 23:56:37.510214 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.510699 kubelet[3121]: E0706 23:56:37.510661 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.510699 kubelet[3121]: W0706 23:56:37.510677 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.510699 kubelet[3121]: E0706 23:56:37.510696 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:37.510975 kubelet[3121]: E0706 23:56:37.510898 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.510975 kubelet[3121]: W0706 23:56:37.510911 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.510975 kubelet[3121]: E0706 23:56:37.510938 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.511178 kubelet[3121]: E0706 23:56:37.511166 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.511178 kubelet[3121]: W0706 23:56:37.511176 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.511504 kubelet[3121]: E0706 23:56:37.511291 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.511504 kubelet[3121]: E0706 23:56:37.511355 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.511504 kubelet[3121]: W0706 23:56:37.511364 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.511504 kubelet[3121]: E0706 23:56:37.511375 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.511728 kubelet[3121]: E0706 23:56:37.511656 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.511728 kubelet[3121]: W0706 23:56:37.511666 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.511728 kubelet[3121]: E0706 23:56:37.511688 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.512078 kubelet[3121]: E0706 23:56:37.512016 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.512078 kubelet[3121]: W0706 23:56:37.512074 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.512194 kubelet[3121]: E0706 23:56:37.512095 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:56:37.512321 kubelet[3121]: E0706 23:56:37.512305 3121 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:37.512321 kubelet[3121]: W0706 23:56:37.512318 3121 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:37.512400 kubelet[3121]: E0706 23:56:37.512332 3121 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:56:37.703158 containerd[1675]: time="2025-07-06T23:56:37.703107157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:37.706115 containerd[1675]: time="2025-07-06T23:56:37.706064941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 6 23:56:37.712179 containerd[1675]: time="2025-07-06T23:56:37.712107212Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:37.716465 containerd[1675]: time="2025-07-06T23:56:37.716405334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:37.717142 containerd[1675]: time="2025-07-06T23:56:37.717108354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.267290956s" Jul 6 23:56:37.717299 containerd[1675]: time="2025-07-06T23:56:37.717147255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 6 23:56:37.720057 containerd[1675]: time="2025-07-06T23:56:37.719972135Z" level=info msg="CreateContainer within sandbox \"4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:56:37.758132 containerd[1675]: time="2025-07-06T23:56:37.757985114Z" level=info msg="CreateContainer within sandbox \"4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a\"" Jul 6 23:56:37.761385 containerd[1675]: time="2025-07-06T23:56:37.759842566Z" level=info msg="StartContainer for \"1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a\"" Jul 6 23:56:37.792470 systemd[1]: run-containerd-runc-k8s.io-1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a-runc.1s20Vo.mount: Deactivated successfully. 
Jul 6 23:56:37.798195 systemd[1]: Started cri-containerd-1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a.scope - libcontainer container 1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a.
Jul 6 23:56:37.830596 containerd[1675]: time="2025-07-06T23:56:37.830537772Z" level=info msg="StartContainer for \"1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a\" returns successfully"
Jul 6 23:56:37.843206 systemd[1]: cri-containerd-1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a.scope: Deactivated successfully.
Jul 6 23:56:37.867796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a-rootfs.mount: Deactivated successfully.
Jul 6 23:56:38.365874 kubelet[3121]: E0706 23:56:38.364008 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldbbw" podUID="80d646f2-c2b8-4ec5-90f1-97a890b8837a"
Jul 6 23:56:38.469387 kubelet[3121]: I0706 23:56:38.469050 3121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:56:39.350762 containerd[1675]: time="2025-07-06T23:56:39.350644802Z" level=info msg="shim disconnected" id=1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a namespace=k8s.io
Jul 6 23:56:39.350762 containerd[1675]: time="2025-07-06T23:56:39.350729904Z" level=warning msg="cleaning up after shim disconnected" id=1a26d18c87d4bd7001da4f9c067c4c867adf084f5fd2f141a869f2c15c10a08a namespace=k8s.io
Jul 6 23:56:39.350762 containerd[1675]: time="2025-07-06T23:56:39.350746105Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:56:39.473270 containerd[1675]: time="2025-07-06T23:56:39.473218179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 6 23:56:40.364845 kubelet[3121]: E0706 23:56:40.363456 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldbbw" podUID="80d646f2-c2b8-4ec5-90f1-97a890b8837a"
Jul 6 23:56:42.365859 kubelet[3121]: E0706 23:56:42.365451 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldbbw" podUID="80d646f2-c2b8-4ec5-90f1-97a890b8837a"
Jul 6 23:56:42.530414 containerd[1675]: time="2025-07-06T23:56:42.530359066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:42.537950 containerd[1675]: time="2025-07-06T23:56:42.537888359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 6 23:56:42.542181 containerd[1675]: time="2025-07-06T23:56:42.542108468Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:42.545816 containerd[1675]: time="2025-07-06T23:56:42.545759661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:42.546575 containerd[1675]: time="2025-07-06T23:56:42.546398478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.073126196s" Jul 6 23:56:42.546575 containerd[1675]: time="2025-07-06T23:56:42.546441579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 6 23:56:42.549239 containerd[1675]: time="2025-07-06T23:56:42.548957743Z" level=info msg="CreateContainer within sandbox \"4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:56:42.588256 containerd[1675]: time="2025-07-06T23:56:42.588213152Z" level=info msg="CreateContainer within sandbox \"4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0d70f78818d8c848a9b722c2e564e4b8980010ab7999acd93ab106956de8aa5f\"" Jul 6 23:56:42.589438 containerd[1675]: time="2025-07-06T23:56:42.588699064Z" level=info msg="StartContainer for \"0d70f78818d8c848a9b722c2e564e4b8980010ab7999acd93ab106956de8aa5f\"" Jul 6 23:56:42.626198 systemd[1]: Started cri-containerd-0d70f78818d8c848a9b722c2e564e4b8980010ab7999acd93ab106956de8aa5f.scope - libcontainer container 0d70f78818d8c848a9b722c2e564e4b8980010ab7999acd93ab106956de8aa5f. Jul 6 23:56:42.655322 containerd[1675]: time="2025-07-06T23:56:42.655257674Z" level=info msg="StartContainer for \"0d70f78818d8c848a9b722c2e564e4b8980010ab7999acd93ab106956de8aa5f\" returns successfully" Jul 6 23:56:44.276659 containerd[1675]: time="2025-07-06T23:56:44.276580416Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:56:44.278910 systemd[1]: cri-containerd-0d70f78818d8c848a9b722c2e564e4b8980010ab7999acd93ab106956de8aa5f.scope: Deactivated successfully. Jul 6 23:56:44.300641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d70f78818d8c848a9b722c2e564e4b8980010ab7999acd93ab106956de8aa5f-rootfs.mount: Deactivated successfully. 
Jul 6 23:56:44.336957 kubelet[3121]: I0706 23:56:44.335726 3121 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 6 23:56:44.899165 kubelet[3121]: I0706 23:56:44.456986 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dc25359-c7f1-4176-9811-c8a3b8856ebe-whisker-ca-bundle\") pod \"whisker-5497dd78ff-5pz86\" (UID: \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\") " pod="calico-system/whisker-5497dd78ff-5pz86"
Jul 6 23:56:44.899165 kubelet[3121]: I0706 23:56:44.457099 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6dc25359-c7f1-4176-9811-c8a3b8856ebe-whisker-backend-key-pair\") pod \"whisker-5497dd78ff-5pz86\" (UID: \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\") " pod="calico-system/whisker-5497dd78ff-5pz86"
Jul 6 23:56:44.899165 kubelet[3121]: I0706 23:56:44.457257 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t4h7\" (UniqueName: \"kubernetes.io/projected/6dc25359-c7f1-4176-9811-c8a3b8856ebe-kube-api-access-7t4h7\") pod \"whisker-5497dd78ff-5pz86\" (UID: \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\") " pod="calico-system/whisker-5497dd78ff-5pz86"
Jul 6 23:56:44.899165 kubelet[3121]: I0706 23:56:44.558164 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f8547f6-c7c2-4c77-af76-00fb7e939448-config-volume\") pod \"coredns-668d6bf9bc-bz9rt\" (UID: \"8f8547f6-c7c2-4c77-af76-00fb7e939448\") " pod="kube-system/coredns-668d6bf9bc-bz9rt"
Jul 6 23:56:44.899165 kubelet[3121]: I0706 23:56:44.558235 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6pb9\" (UniqueName: \"kubernetes.io/projected/f7ba973d-dc0d-426a-8adc-f92cde7b6fed-kube-api-access-k6pb9\") pod \"goldmane-768f4c5c69-8nwl8\" (UID: \"f7ba973d-dc0d-426a-8adc-f92cde7b6fed\") " pod="calico-system/goldmane-768f4c5c69-8nwl8"
Jul 6 23:56:44.375456 systemd[1]: Created slice kubepods-besteffort-pod80d646f2_c2b8_4ec5_90f1_97a890b8837a.slice - libcontainer container kubepods-besteffort-pod80d646f2_c2b8_4ec5_90f1_97a890b8837a.slice.
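The systemd slice names in this and the following records are derived mechanically from each pod's QoS class and UID: the kubelet parents the pod under kubepods-<qos>- and flattens the dashes of the UID into underscores so the result is a valid systemd unit name. A sketch of the mapping (inferred from the log lines themselves, not taken from the kubelet's source):

    package main

    // Reconstructs the systemd slice names seen in the surrounding log
    // from a pod's QoS class and UID (inferred mapping; illustrative only).

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // csi-node-driver-ldbbw, a BestEffort pod:
        fmt.Println(podSlice("besteffort", "80d646f2-c2b8-4ec5-90f1-97a890b8837a"))
        // coredns-668d6bf9bc-bz9rt, a Burstable pod:
        fmt.Println(podSlice("burstable", "8f8547f6-c7c2-4c77-af76-00fb7e939448"))
    }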
Jul 6 23:56:44.899599 kubelet[3121]: I0706 23:56:44.558274 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ba973d-dc0d-426a-8adc-f92cde7b6fed-config\") pod \"goldmane-768f4c5c69-8nwl8\" (UID: \"f7ba973d-dc0d-426a-8adc-f92cde7b6fed\") " pod="calico-system/goldmane-768f4c5c69-8nwl8"
Jul 6 23:56:44.899599 kubelet[3121]: I0706 23:56:44.558306 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lmlb\" (UniqueName: \"kubernetes.io/projected/315fadc6-402d-4c42-a716-cdde0ac33312-kube-api-access-8lmlb\") pod \"calico-apiserver-5b4985b7cd-b2wsx\" (UID: \"315fadc6-402d-4c42-a716-cdde0ac33312\") " pod="calico-apiserver/calico-apiserver-5b4985b7cd-b2wsx"
Jul 6 23:56:44.899599 kubelet[3121]: I0706 23:56:44.558391 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f7ba973d-dc0d-426a-8adc-f92cde7b6fed-goldmane-key-pair\") pod \"goldmane-768f4c5c69-8nwl8\" (UID: \"f7ba973d-dc0d-426a-8adc-f92cde7b6fed\") " pod="calico-system/goldmane-768f4c5c69-8nwl8"
Jul 6 23:56:44.899599 kubelet[3121]: I0706 23:56:44.558437 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/668cd08b-4d24-45a3-a679-683237a42032-config-volume\") pod \"coredns-668d6bf9bc-rv7bm\" (UID: \"668cd08b-4d24-45a3-a679-683237a42032\") " pod="kube-system/coredns-668d6bf9bc-rv7bm"
Jul 6 23:56:44.899599 kubelet[3121]: I0706 23:56:44.558469 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/315fadc6-402d-4c42-a716-cdde0ac33312-calico-apiserver-certs\") pod \"calico-apiserver-5b4985b7cd-b2wsx\" (UID: \"315fadc6-402d-4c42-a716-cdde0ac33312\") " pod="calico-apiserver/calico-apiserver-5b4985b7cd-b2wsx"
Jul 6 23:56:44.413650 systemd[1]: Created slice kubepods-burstable-pod8f8547f6_c7c2_4c77_af76_00fb7e939448.slice - libcontainer container kubepods-burstable-pod8f8547f6_c7c2_4c77_af76_00fb7e939448.slice.
Jul 6 23:56:44.899877 kubelet[3121]: I0706 23:56:44.558533 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3050acde-8e24-48b0-af1c-c0021f4ca060-tigera-ca-bundle\") pod \"calico-kube-controllers-68c8fc6bd-5mrzx\" (UID: \"3050acde-8e24-48b0-af1c-c0021f4ca060\") " pod="calico-system/calico-kube-controllers-68c8fc6bd-5mrzx"
Jul 6 23:56:44.899877 kubelet[3121]: I0706 23:56:44.558566 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chzt5\" (UniqueName: \"kubernetes.io/projected/3050acde-8e24-48b0-af1c-c0021f4ca060-kube-api-access-chzt5\") pod \"calico-kube-controllers-68c8fc6bd-5mrzx\" (UID: \"3050acde-8e24-48b0-af1c-c0021f4ca060\") " pod="calico-system/calico-kube-controllers-68c8fc6bd-5mrzx"
Jul 6 23:56:44.899877 kubelet[3121]: I0706 23:56:44.558603 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7ba973d-dc0d-426a-8adc-f92cde7b6fed-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-8nwl8\" (UID: \"f7ba973d-dc0d-426a-8adc-f92cde7b6fed\") " pod="calico-system/goldmane-768f4c5c69-8nwl8"
Jul 6 23:56:44.899877 kubelet[3121]: I0706 23:56:44.558653 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clbzs\" (UniqueName: \"kubernetes.io/projected/8f8547f6-c7c2-4c77-af76-00fb7e939448-kube-api-access-clbzs\") pod \"coredns-668d6bf9bc-bz9rt\" (UID: \"8f8547f6-c7c2-4c77-af76-00fb7e939448\") " pod="kube-system/coredns-668d6bf9bc-bz9rt"
Jul 6 23:56:44.899877 kubelet[3121]: I0706 23:56:44.558679 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d596fb90-2a5b-4b75-b0f5-1553ebaf2652-calico-apiserver-certs\") pod \"calico-apiserver-5b4985b7cd-r7qrd\" (UID: \"d596fb90-2a5b-4b75-b0f5-1553ebaf2652\") " pod="calico-apiserver/calico-apiserver-5b4985b7cd-r7qrd"
Jul 6 23:56:44.424427 systemd[1]: Created slice kubepods-besteffort-pod6dc25359_c7f1_4176_9811_c8a3b8856ebe.slice - libcontainer container kubepods-besteffort-pod6dc25359_c7f1_4176_9811_c8a3b8856ebe.slice.
Jul 6 23:56:44.900258 kubelet[3121]: I0706 23:56:44.558708 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8g47\" (UniqueName: \"kubernetes.io/projected/668cd08b-4d24-45a3-a679-683237a42032-kube-api-access-p8g47\") pod \"coredns-668d6bf9bc-rv7bm\" (UID: \"668cd08b-4d24-45a3-a679-683237a42032\") " pod="kube-system/coredns-668d6bf9bc-rv7bm"
Jul 6 23:56:44.900258 kubelet[3121]: I0706 23:56:44.558750 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ndq\" (UniqueName: \"kubernetes.io/projected/d596fb90-2a5b-4b75-b0f5-1553ebaf2652-kube-api-access-46ndq\") pod \"calico-apiserver-5b4985b7cd-r7qrd\" (UID: \"d596fb90-2a5b-4b75-b0f5-1553ebaf2652\") " pod="calico-apiserver/calico-apiserver-5b4985b7cd-r7qrd"
Jul 6 23:56:44.441228 systemd[1]: Created slice kubepods-besteffort-podf7ba973d_dc0d_426a_8adc_f92cde7b6fed.slice - libcontainer container kubepods-besteffort-podf7ba973d_dc0d_426a_8adc_f92cde7b6fed.slice.
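Each VerifyControllerAttachedVolume record corresponds to one entry in a pod's .spec.volumes, with the UniqueName encoding the plugin kind (configmap, secret, projected) plus the pod UID and volume name. For the whisker-5497dd78ff-5pz86 pod a few records up, the three volumes map onto a manifest fragment roughly like this (a reconstruction from the log, as JSON; the configMap/secret source names and the projected token layout are assumptions, since the log only shows volume names and plugin types):

    {
      "volumes": [
        {
          "name": "whisker-ca-bundle",
          "configMap": { "name": "whisker-ca-bundle" }
        },
        {
          "name": "whisker-backend-key-pair",
          "secret": { "secretName": "whisker-backend-key-pair" }
        },
        {
          "name": "kube-api-access-7t4h7",
          "projected": {
            "sources": [ { "serviceAccountToken": { "path": "token" } } ]
          }
        }
      ]
    }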
Jul 6 23:56:44.448320 systemd[1]: Created slice kubepods-besteffort-podd596fb90_2a5b_4b75_b0f5_1553ebaf2652.slice - libcontainer container kubepods-besteffort-podd596fb90_2a5b_4b75_b0f5_1553ebaf2652.slice.
Jul 6 23:56:44.454913 systemd[1]: Created slice kubepods-besteffort-pod3050acde_8e24_48b0_af1c_c0021f4ca060.slice - libcontainer container kubepods-besteffort-pod3050acde_8e24_48b0_af1c_c0021f4ca060.slice.
Jul 6 23:56:44.464673 systemd[1]: Created slice kubepods-burstable-pod668cd08b_4d24_45a3_a679_683237a42032.slice - libcontainer container kubepods-burstable-pod668cd08b_4d24_45a3_a679_683237a42032.slice.
Jul 6 23:56:44.471653 systemd[1]: Created slice kubepods-besteffort-pod315fadc6_402d_4c42_a716_cdde0ac33312.slice - libcontainer container kubepods-besteffort-pod315fadc6_402d_4c42_a716_cdde0ac33312.slice.
Jul 6 23:56:44.920014 containerd[1675]: time="2025-07-06T23:56:44.919341624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldbbw,Uid:80d646f2-c2b8-4ec5-90f1-97a890b8837a,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:45.207901 containerd[1675]: time="2025-07-06T23:56:45.207774733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bz9rt,Uid:8f8547f6-c7c2-4c77-af76-00fb7e939448,Namespace:kube-system,Attempt:0,}"
Jul 6 23:56:45.215679 containerd[1675]: time="2025-07-06T23:56:45.215607834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8nwl8,Uid:f7ba973d-dc0d-426a-8adc-f92cde7b6fed,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:45.216055 containerd[1675]: time="2025-07-06T23:56:45.215607834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4985b7cd-r7qrd,Uid:d596fb90-2a5b-4b75-b0f5-1553ebaf2652,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:56:45.222798 containerd[1675]: time="2025-07-06T23:56:45.222715716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c8fc6bd-5mrzx,Uid:3050acde-8e24-48b0-af1c-c0021f4ca060,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:45.231512 containerd[1675]: time="2025-07-06T23:56:45.231479341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rv7bm,Uid:668cd08b-4d24-45a3-a679-683237a42032,Namespace:kube-system,Attempt:0,}"
Jul 6 23:56:45.242135 containerd[1675]: time="2025-07-06T23:56:45.242097814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4985b7cd-b2wsx,Uid:315fadc6-402d-4c42-a716-cdde0ac33312,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:56:45.253940 containerd[1675]: time="2025-07-06T23:56:45.253887217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5497dd78ff-5pz86,Uid:6dc25359-c7f1-4176-9811-c8a3b8856ebe,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:45.549857 containerd[1675]: time="2025-07-06T23:56:45.549659713Z" level=info msg="shim disconnected" id=0d70f78818d8c848a9b722c2e564e4b8980010ab7999acd93ab106956de8aa5f namespace=k8s.io
Jul 6 23:56:45.549857 containerd[1675]: time="2025-07-06T23:56:45.549739816Z" level=warning msg="cleaning up after shim disconnected" id=0d70f78818d8c848a9b722c2e564e4b8980010ab7999acd93ab106956de8aa5f namespace=k8s.io
Jul 6 23:56:45.549857 containerd[1675]: time="2025-07-06T23:56:45.549756516Z" level=info msg="cleaning up dead shim" namespace=k8s.io
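Every sandbox failure in the cascade that follows reduces to the same missing file: the Calico CNI binary stats /var/lib/calico/nodename, which the calico/node container writes once it is up, and until then every pod network add or delete fails. The gist of that failing check, sketched in Go (illustrative; the real check is inside Calico's plugin, and the path and hint text are taken from the log):

    package main

    // Sketch of the check behind "stat /var/lib/calico/nodename: no such
    // file or directory" -- the CNI plugin refuses to wire any pod until
    // calico/node has written its node name to this file.

    import (
        "fmt"
        "os"
    )

    func nodenameReady() error {
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return nil
    }

    func main() {
        if err := nodenameReady(); err != nil {
            fmt.Println("sandbox setup would fail:", err)
        }
    }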
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:45.907091 containerd[1675]: time="2025-07-06T23:56:45.907012192Z" level=error msg="encountered an error cleaning up failed sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:45.907466 containerd[1675]: time="2025-07-06T23:56:45.907423502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rv7bm,Uid:668cd08b-4d24-45a3-a679-683237a42032,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:45.908094 kubelet[3121]: E0706 23:56:45.907938 3121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:45.909069 kubelet[3121]: E0706 23:56:45.908552 3121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rv7bm" Jul 6 23:56:45.909069 kubelet[3121]: E0706 23:56:45.908591 3121 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rv7bm" Jul 6 23:56:45.909069 kubelet[3121]: E0706 23:56:45.908656 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rv7bm_kube-system(668cd08b-4d24-45a3-a679-683237a42032)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rv7bm_kube-system(668cd08b-4d24-45a3-a679-683237a42032)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rv7bm" podUID="668cd08b-4d24-45a3-a679-683237a42032" Jul 6 23:56:46.017047 containerd[1675]: time="2025-07-06T23:56:46.015982291Z" level=error msg="Failed to destroy network for sandbox 
\"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.018504 containerd[1675]: time="2025-07-06T23:56:46.016560205Z" level=error msg="encountered an error cleaning up failed sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.018627 containerd[1675]: time="2025-07-06T23:56:46.018552657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8nwl8,Uid:f7ba973d-dc0d-426a-8adc-f92cde7b6fed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.018964 kubelet[3121]: E0706 23:56:46.018879 3121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.019098 kubelet[3121]: E0706 23:56:46.019005 3121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8nwl8" Jul 6 23:56:46.019098 kubelet[3121]: E0706 23:56:46.019060 3121 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8nwl8" Jul 6 23:56:46.019230 kubelet[3121]: E0706 23:56:46.019147 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-8nwl8_calico-system(f7ba973d-dc0d-426a-8adc-f92cde7b6fed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-8nwl8_calico-system(f7ba973d-dc0d-426a-8adc-f92cde7b6fed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8nwl8" podUID="f7ba973d-dc0d-426a-8adc-f92cde7b6fed" Jul 6 23:56:46.029007 containerd[1675]: time="2025-07-06T23:56:46.028962424Z" 
level=error msg="Failed to destroy network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.029393 containerd[1675]: time="2025-07-06T23:56:46.029344934Z" level=error msg="Failed to destroy network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.029857 containerd[1675]: time="2025-07-06T23:56:46.029820646Z" level=error msg="encountered an error cleaning up failed sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.030523 containerd[1675]: time="2025-07-06T23:56:46.030481563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldbbw,Uid:80d646f2-c2b8-4ec5-90f1-97a890b8837a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.030793 kubelet[3121]: E0706 23:56:46.030750 3121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.030881 kubelet[3121]: E0706 23:56:46.030819 3121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ldbbw" Jul 6 23:56:46.030881 kubelet[3121]: E0706 23:56:46.030874 3121 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ldbbw" Jul 6 23:56:46.030971 kubelet[3121]: E0706 23:56:46.030920 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ldbbw_calico-system(80d646f2-c2b8-4ec5-90f1-97a890b8837a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ldbbw_calico-system(80d646f2-c2b8-4ec5-90f1-97a890b8837a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ldbbw" podUID="80d646f2-c2b8-4ec5-90f1-97a890b8837a" Jul 6 23:56:46.035312 containerd[1675]: time="2025-07-06T23:56:46.035273786Z" level=error msg="encountered an error cleaning up failed sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.035392 containerd[1675]: time="2025-07-06T23:56:46.035342088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4985b7cd-b2wsx,Uid:315fadc6-402d-4c42-a716-cdde0ac33312,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.035593 kubelet[3121]: E0706 23:56:46.035538 3121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.035655 kubelet[3121]: E0706 23:56:46.035617 3121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4985b7cd-b2wsx" Jul 6 23:56:46.035705 kubelet[3121]: E0706 23:56:46.035655 3121 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4985b7cd-b2wsx" Jul 6 23:56:46.035771 kubelet[3121]: E0706 23:56:46.035739 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b4985b7cd-b2wsx_calico-apiserver(315fadc6-402d-4c42-a716-cdde0ac33312)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b4985b7cd-b2wsx_calico-apiserver(315fadc6-402d-4c42-a716-cdde0ac33312)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5b4985b7cd-b2wsx" podUID="315fadc6-402d-4c42-a716-cdde0ac33312" Jul 6 23:56:46.049419 containerd[1675]: time="2025-07-06T23:56:46.049272746Z" level=error msg="Failed to destroy network for sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.049938 containerd[1675]: time="2025-07-06T23:56:46.049817260Z" level=error msg="encountered an error cleaning up failed sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.049938 containerd[1675]: time="2025-07-06T23:56:46.049884261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4985b7cd-r7qrd,Uid:d596fb90-2a5b-4b75-b0f5-1553ebaf2652,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.050930 kubelet[3121]: E0706 23:56:46.050257 3121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.050930 kubelet[3121]: E0706 23:56:46.050324 3121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4985b7cd-r7qrd" Jul 6 23:56:46.050930 kubelet[3121]: E0706 23:56:46.050348 3121 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4985b7cd-r7qrd" Jul 6 23:56:46.051130 kubelet[3121]: E0706 23:56:46.050399 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b4985b7cd-r7qrd_calico-apiserver(d596fb90-2a5b-4b75-b0f5-1553ebaf2652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b4985b7cd-r7qrd_calico-apiserver(d596fb90-2a5b-4b75-b0f5-1553ebaf2652)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4985b7cd-r7qrd" podUID="d596fb90-2a5b-4b75-b0f5-1553ebaf2652" Jul 6 23:56:46.059826 containerd[1675]: time="2025-07-06T23:56:46.059768315Z" level=error msg="Failed to destroy network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.060316 containerd[1675]: time="2025-07-06T23:56:46.060279828Z" level=error msg="encountered an error cleaning up failed sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.060511 containerd[1675]: time="2025-07-06T23:56:46.060481333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c8fc6bd-5mrzx,Uid:3050acde-8e24-48b0-af1c-c0021f4ca060,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.060907 kubelet[3121]: E0706 23:56:46.060867 3121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.060999 kubelet[3121]: E0706 23:56:46.060931 3121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c8fc6bd-5mrzx" Jul 6 23:56:46.060999 kubelet[3121]: E0706 23:56:46.060965 3121 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c8fc6bd-5mrzx" Jul 6 23:56:46.061613 kubelet[3121]: E0706 23:56:46.061567 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68c8fc6bd-5mrzx_calico-system(3050acde-8e24-48b0-af1c-c0021f4ca060)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68c8fc6bd-5mrzx_calico-system(3050acde-8e24-48b0-af1c-c0021f4ca060)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c8fc6bd-5mrzx" podUID="3050acde-8e24-48b0-af1c-c0021f4ca060" Jul 6 23:56:46.066591 containerd[1675]: time="2025-07-06T23:56:46.066525889Z" level=error msg="Failed to destroy network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.066872 containerd[1675]: time="2025-07-06T23:56:46.066767595Z" level=error msg="Failed to destroy network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.067270 containerd[1675]: time="2025-07-06T23:56:46.067092303Z" level=error msg="encountered an error cleaning up failed sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.067270 containerd[1675]: time="2025-07-06T23:56:46.067161205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bz9rt,Uid:8f8547f6-c7c2-4c77-af76-00fb7e939448,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.067270 containerd[1675]: time="2025-07-06T23:56:46.067209606Z" level=error msg="encountered an error cleaning up failed sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.067502 containerd[1675]: time="2025-07-06T23:56:46.067475313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5497dd78ff-5pz86,Uid:6dc25359-c7f1-4176-9811-c8a3b8856ebe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.067594 kubelet[3121]: E0706 23:56:46.067528 3121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 6 23:56:46.067652 kubelet[3121]: E0706 23:56:46.067593 3121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bz9rt" Jul 6 23:56:46.067652 kubelet[3121]: E0706 23:56:46.067616 3121 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bz9rt" Jul 6 23:56:46.067741 kubelet[3121]: E0706 23:56:46.067670 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bz9rt_kube-system(8f8547f6-c7c2-4c77-af76-00fb7e939448)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bz9rt_kube-system(8f8547f6-c7c2-4c77-af76-00fb7e939448)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bz9rt" podUID="8f8547f6-c7c2-4c77-af76-00fb7e939448" Jul 6 23:56:46.068059 kubelet[3121]: E0706 23:56:46.068010 3121 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.068152 kubelet[3121]: E0706 23:56:46.068073 3121 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5497dd78ff-5pz86" Jul 6 23:56:46.068152 kubelet[3121]: E0706 23:56:46.068099 3121 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5497dd78ff-5pz86" Jul 6 23:56:46.068245 kubelet[3121]: E0706 23:56:46.068172 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5497dd78ff-5pz86_calico-system(6dc25359-c7f1-4176-9811-c8a3b8856ebe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5497dd78ff-5pz86_calico-system(6dc25359-c7f1-4176-9811-c8a3b8856ebe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5497dd78ff-5pz86" podUID="6dc25359-c7f1-4176-9811-c8a3b8856ebe" Jul 6 23:56:46.305736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782-shm.mount: Deactivated successfully. Jul 6 23:56:46.305887 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378-shm.mount: Deactivated successfully. Jul 6 23:56:46.305993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705-shm.mount: Deactivated successfully. Jul 6 23:56:46.306121 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719-shm.mount: Deactivated successfully. Jul 6 23:56:46.489679 kubelet[3121]: I0706 23:56:46.489642 3121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:56:46.490622 containerd[1675]: time="2025-07-06T23:56:46.490589080Z" level=info msg="StopPodSandbox for \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\"" Jul 6 23:56:46.491095 containerd[1675]: time="2025-07-06T23:56:46.491055492Z" level=info msg="Ensure that sandbox c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341 in task-service has been cleanup successfully" Jul 6 23:56:46.492118 kubelet[3121]: I0706 23:56:46.491541 3121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:56:46.492586 containerd[1675]: time="2025-07-06T23:56:46.492550831Z" level=info msg="StopPodSandbox for \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\"" Jul 6 23:56:46.492917 containerd[1675]: time="2025-07-06T23:56:46.492840638Z" level=info msg="Ensure that sandbox 70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719 in task-service has been cleanup successfully" Jul 6 23:56:46.495241 kubelet[3121]: I0706 23:56:46.495191 3121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:56:46.495936 containerd[1675]: time="2025-07-06T23:56:46.495794914Z" level=info msg="StopPodSandbox for \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\"" Jul 6 23:56:46.496191 containerd[1675]: time="2025-07-06T23:56:46.496162323Z" level=info msg="Ensure that sandbox 0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94 in task-service has been cleanup successfully" Jul 6 23:56:46.496986 kubelet[3121]: I0706 23:56:46.496924 3121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:56:46.499998 containerd[1675]: time="2025-07-06T23:56:46.499896219Z" level=info msg="StopPodSandbox for \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\"" Jul 6 23:56:46.501465 
containerd[1675]: time="2025-07-06T23:56:46.500867744Z" level=info msg="Ensure that sandbox 7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782 in task-service has been cleanup successfully" Jul 6 23:56:46.501750 kubelet[3121]: I0706 23:56:46.501730 3121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:56:46.504724 containerd[1675]: time="2025-07-06T23:56:46.504302233Z" level=info msg="StopPodSandbox for \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\"" Jul 6 23:56:46.504724 containerd[1675]: time="2025-07-06T23:56:46.504477837Z" level=info msg="Ensure that sandbox 18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705 in task-service has been cleanup successfully" Jul 6 23:56:46.516841 containerd[1675]: time="2025-07-06T23:56:46.516797653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:56:46.530673 kubelet[3121]: I0706 23:56:46.530639 3121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:56:46.551042 containerd[1675]: time="2025-07-06T23:56:46.550755126Z" level=info msg="StopPodSandbox for \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\"" Jul 6 23:56:46.551042 containerd[1675]: time="2025-07-06T23:56:46.550998632Z" level=info msg="Ensure that sandbox f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47 in task-service has been cleanup successfully" Jul 6 23:56:46.555386 kubelet[3121]: I0706 23:56:46.554879 3121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:56:46.558555 containerd[1675]: time="2025-07-06T23:56:46.558459223Z" level=info msg="StopPodSandbox for \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\"" Jul 6 23:56:46.559432 containerd[1675]: time="2025-07-06T23:56:46.558851134Z" level=info msg="Ensure that sandbox 4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542 in task-service has been cleanup successfully" Jul 6 23:56:46.572621 kubelet[3121]: I0706 23:56:46.571474 3121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:56:46.575671 containerd[1675]: time="2025-07-06T23:56:46.575069150Z" level=info msg="StopPodSandbox for \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\"" Jul 6 23:56:46.578526 containerd[1675]: time="2025-07-06T23:56:46.577842321Z" level=info msg="Ensure that sandbox 74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378 in task-service has been cleanup successfully" Jul 6 23:56:46.657723 containerd[1675]: time="2025-07-06T23:56:46.657668372Z" level=error msg="StopPodSandbox for \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\" failed" error="failed to destroy network for sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.658131 kubelet[3121]: E0706 23:56:46.658089 3121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:56:46.658253 kubelet[3121]: E0706 23:56:46.658176 3121 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94"} Jul 6 23:56:46.658323 kubelet[3121]: E0706 23:56:46.658285 3121 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d596fb90-2a5b-4b75-b0f5-1553ebaf2652\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:46.659160 kubelet[3121]: E0706 23:56:46.658660 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d596fb90-2a5b-4b75-b0f5-1553ebaf2652\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4985b7cd-r7qrd" podUID="d596fb90-2a5b-4b75-b0f5-1553ebaf2652" Jul 6 23:56:46.665282 containerd[1675]: time="2025-07-06T23:56:46.665228666Z" level=error msg="StopPodSandbox for \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\" failed" error="failed to destroy network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.665450 kubelet[3121]: E0706 23:56:46.665414 3121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:56:46.665562 kubelet[3121]: E0706 23:56:46.665467 3121 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705"} Jul 6 23:56:46.665562 kubelet[3121]: E0706 23:56:46.665509 3121 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80d646f2-c2b8-4ec5-90f1-97a890b8837a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 6 23:56:46.665562 kubelet[3121]: E0706 23:56:46.665541 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80d646f2-c2b8-4ec5-90f1-97a890b8837a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ldbbw" podUID="80d646f2-c2b8-4ec5-90f1-97a890b8837a" Jul 6 23:56:46.675219 containerd[1675]: time="2025-07-06T23:56:46.675163821Z" level=error msg="StopPodSandbox for \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\" failed" error="failed to destroy network for sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.675505 kubelet[3121]: E0706 23:56:46.675381 3121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:56:46.675505 kubelet[3121]: E0706 23:56:46.675437 3121 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719"} Jul 6 23:56:46.675505 kubelet[3121]: E0706 23:56:46.675477 3121 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"668cd08b-4d24-45a3-a679-683237a42032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:46.675952 kubelet[3121]: E0706 23:56:46.675506 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"668cd08b-4d24-45a3-a679-683237a42032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rv7bm" podUID="668cd08b-4d24-45a3-a679-683237a42032" Jul 6 23:56:46.687154 containerd[1675]: time="2025-07-06T23:56:46.686699217Z" level=error msg="StopPodSandbox for \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\" failed" error="failed to destroy network for sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jul 6 23:56:46.687322 kubelet[3121]: E0706 23:56:46.687136 3121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:56:46.687322 kubelet[3121]: E0706 23:56:46.687186 3121 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782"} Jul 6 23:56:46.687322 kubelet[3121]: E0706 23:56:46.687225 3121 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7ba973d-dc0d-426a-8adc-f92cde7b6fed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:46.687322 kubelet[3121]: E0706 23:56:46.687255 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7ba973d-dc0d-426a-8adc-f92cde7b6fed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8nwl8" podUID="f7ba973d-dc0d-426a-8adc-f92cde7b6fed" Jul 6 23:56:46.691838 containerd[1675]: time="2025-07-06T23:56:46.691273035Z" level=error msg="StopPodSandbox for \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\" failed" error="failed to destroy network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.691953 kubelet[3121]: E0706 23:56:46.691462 3121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:56:46.691953 kubelet[3121]: E0706 23:56:46.691506 3121 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341"} Jul 6 23:56:46.691953 kubelet[3121]: E0706 23:56:46.691549 3121 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"315fadc6-402d-4c42-a716-cdde0ac33312\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:46.691953 kubelet[3121]: E0706 23:56:46.691582 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"315fadc6-402d-4c42-a716-cdde0ac33312\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4985b7cd-b2wsx" podUID="315fadc6-402d-4c42-a716-cdde0ac33312" Jul 6 23:56:46.718750 containerd[1675]: time="2025-07-06T23:56:46.718691439Z" level=error msg="StopPodSandbox for \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\" failed" error="failed to destroy network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.719252 kubelet[3121]: E0706 23:56:46.719192 3121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:56:46.719369 kubelet[3121]: E0706 23:56:46.719271 3121 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542"} Jul 6 23:56:46.719369 kubelet[3121]: E0706 23:56:46.719312 3121 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3050acde-8e24-48b0-af1c-c0021f4ca060\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:46.719369 kubelet[3121]: E0706 23:56:46.719347 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3050acde-8e24-48b0-af1c-c0021f4ca060\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c8fc6bd-5mrzx" podUID="3050acde-8e24-48b0-af1c-c0021f4ca060" Jul 6 23:56:46.725843 containerd[1675]: time="2025-07-06T23:56:46.725754320Z" level=error msg="StopPodSandbox for 
\"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\" failed" error="failed to destroy network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.726199 kubelet[3121]: E0706 23:56:46.726050 3121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:56:46.726199 kubelet[3121]: E0706 23:56:46.726108 3121 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378"} Jul 6 23:56:46.726199 kubelet[3121]: E0706 23:56:46.726163 3121 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f8547f6-c7c2-4c77-af76-00fb7e939448\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:46.726478 kubelet[3121]: E0706 23:56:46.726206 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f8547f6-c7c2-4c77-af76-00fb7e939448\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bz9rt" podUID="8f8547f6-c7c2-4c77-af76-00fb7e939448" Jul 6 23:56:46.727034 containerd[1675]: time="2025-07-06T23:56:46.726973052Z" level=error msg="StopPodSandbox for \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\" failed" error="failed to destroy network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:46.727221 kubelet[3121]: E0706 23:56:46.727187 3121 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:56:46.727311 kubelet[3121]: E0706 23:56:46.727238 3121 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47"} Jul 6 23:56:46.727311 kubelet[3121]: E0706 23:56:46.727275 3121 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:46.727410 kubelet[3121]: E0706 23:56:46.727305 3121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5497dd78ff-5pz86" podUID="6dc25359-c7f1-4176-9811-c8a3b8856ebe" Jul 6 23:56:49.134264 kubelet[3121]: I0706 23:56:49.133970 3121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:56:52.992685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952370025.mount: Deactivated successfully. Jul 6 23:56:53.027308 containerd[1675]: time="2025-07-06T23:56:53.027253270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:53.030884 containerd[1675]: time="2025-07-06T23:56:53.030807455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:56:53.034835 containerd[1675]: time="2025-07-06T23:56:53.034777950Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:53.040513 containerd[1675]: time="2025-07-06T23:56:53.040457086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:53.041499 containerd[1675]: time="2025-07-06T23:56:53.041010199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.524163245s" Jul 6 23:56:53.041499 containerd[1675]: time="2025-07-06T23:56:53.041063501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:56:53.049703 containerd[1675]: time="2025-07-06T23:56:53.049408300Z" level=info msg="CreateContainer within sandbox \"4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:56:53.105613 containerd[1675]: time="2025-07-06T23:56:53.105569145Z" level=info 
msg="CreateContainer within sandbox \"4691df4ff6e20640e82b71d164cede3087a2e3a39357f5feab132804abdd1d31\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f67949894e6be477a2d3f53a9a4c87f253c3066a507c60d9f3d97490414b43a1\"" Jul 6 23:56:53.106278 containerd[1675]: time="2025-07-06T23:56:53.106248861Z" level=info msg="StartContainer for \"f67949894e6be477a2d3f53a9a4c87f253c3066a507c60d9f3d97490414b43a1\"" Jul 6 23:56:53.137187 systemd[1]: Started cri-containerd-f67949894e6be477a2d3f53a9a4c87f253c3066a507c60d9f3d97490414b43a1.scope - libcontainer container f67949894e6be477a2d3f53a9a4c87f253c3066a507c60d9f3d97490414b43a1. Jul 6 23:56:53.170356 containerd[1675]: time="2025-07-06T23:56:53.170183692Z" level=info msg="StartContainer for \"f67949894e6be477a2d3f53a9a4c87f253c3066a507c60d9f3d97490414b43a1\" returns successfully" Jul 6 23:56:53.367493 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:56:53.367643 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 6 23:56:53.510570 containerd[1675]: time="2025-07-06T23:56:53.510521242Z" level=info msg="StopPodSandbox for \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\"" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.609 [INFO][4354] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.610 [INFO][4354] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" iface="eth0" netns="/var/run/netns/cni-1a4329ad-f52a-e9f3-d1c0-185becc97b9a" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.610 [INFO][4354] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" iface="eth0" netns="/var/run/netns/cni-1a4329ad-f52a-e9f3-d1c0-185becc97b9a" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.610 [INFO][4354] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" iface="eth0" netns="/var/run/netns/cni-1a4329ad-f52a-e9f3-d1c0-185becc97b9a" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.610 [INFO][4354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.610 [INFO][4354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.666 [INFO][4361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" HandleID="k8s-pod-network.f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.667 [INFO][4361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.667 [INFO][4361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.681 [WARNING][4361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" HandleID="k8s-pod-network.f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.681 [INFO][4361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" HandleID="k8s-pod-network.f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.683 [INFO][4361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:53.689748 containerd[1675]: 2025-07-06 23:56:53.686 [INFO][4354] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:56:53.691254 containerd[1675]: time="2025-07-06T23:56:53.690156643Z" level=info msg="TearDown network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\" successfully" Jul 6 23:56:53.691254 containerd[1675]: time="2025-07-06T23:56:53.690193244Z" level=info msg="StopPodSandbox for \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\" returns successfully" Jul 6 23:56:53.715057 kubelet[3121]: I0706 23:56:53.714624 3121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dc25359-c7f1-4176-9811-c8a3b8856ebe-whisker-ca-bundle\") pod \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\" (UID: \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\") " Jul 6 23:56:53.715057 kubelet[3121]: I0706 23:56:53.714681 3121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6dc25359-c7f1-4176-9811-c8a3b8856ebe-whisker-backend-key-pair\") pod \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\" (UID: \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\") " Jul 6 23:56:53.715057 kubelet[3121]: I0706 23:56:53.714717 3121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t4h7\" (UniqueName: \"kubernetes.io/projected/6dc25359-c7f1-4176-9811-c8a3b8856ebe-kube-api-access-7t4h7\") pod \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\" (UID: \"6dc25359-c7f1-4176-9811-c8a3b8856ebe\") " Jul 6 23:56:53.720188 kubelet[3121]: I0706 23:56:53.719825 3121 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dc25359-c7f1-4176-9811-c8a3b8856ebe-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6dc25359-c7f1-4176-9811-c8a3b8856ebe" (UID: "6dc25359-c7f1-4176-9811-c8a3b8856ebe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:56:53.724196 kubelet[3121]: I0706 23:56:53.724076 3121 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dc25359-c7f1-4176-9811-c8a3b8856ebe-kube-api-access-7t4h7" (OuterVolumeSpecName: "kube-api-access-7t4h7") pod "6dc25359-c7f1-4176-9811-c8a3b8856ebe" (UID: "6dc25359-c7f1-4176-9811-c8a3b8856ebe"). InnerVolumeSpecName "kube-api-access-7t4h7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:56:53.724196 kubelet[3121]: I0706 23:56:53.724076 3121 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc25359-c7f1-4176-9811-c8a3b8856ebe-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6dc25359-c7f1-4176-9811-c8a3b8856ebe" (UID: "6dc25359-c7f1-4176-9811-c8a3b8856ebe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:56:53.816268 kubelet[3121]: I0706 23:56:53.816131 3121 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dc25359-c7f1-4176-9811-c8a3b8856ebe-whisker-ca-bundle\") on node \"ci-4081.3.4-a-2f8c6d8615\" DevicePath \"\"" Jul 6 23:56:53.816268 kubelet[3121]: I0706 23:56:53.816177 3121 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6dc25359-c7f1-4176-9811-c8a3b8856ebe-whisker-backend-key-pair\") on node \"ci-4081.3.4-a-2f8c6d8615\" DevicePath \"\"" Jul 6 23:56:53.816268 kubelet[3121]: I0706 23:56:53.816192 3121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7t4h7\" (UniqueName: \"kubernetes.io/projected/6dc25359-c7f1-4176-9811-c8a3b8856ebe-kube-api-access-7t4h7\") on node \"ci-4081.3.4-a-2f8c6d8615\" DevicePath \"\"" Jul 6 23:56:53.991907 systemd[1]: run-netns-cni\x2d1a4329ad\x2df52a\x2de9f3\x2dd1c0\x2d185becc97b9a.mount: Deactivated successfully. Jul 6 23:56:53.992076 systemd[1]: var-lib-kubelet-pods-6dc25359\x2dc7f1\x2d4176\x2d9811\x2dc8a3b8856ebe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:56:53.992181 systemd[1]: var-lib-kubelet-pods-6dc25359\x2dc7f1\x2d4176\x2d9811\x2dc8a3b8856ebe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7t4h7.mount: Deactivated successfully. Jul 6 23:56:54.371314 systemd[1]: Removed slice kubepods-besteffort-pod6dc25359_c7f1_4176_9811_c8a3b8856ebe.slice - libcontainer container kubepods-besteffort-pod6dc25359_c7f1_4176_9811_c8a3b8856ebe.slice. Jul 6 23:56:54.609015 kubelet[3121]: I0706 23:56:54.608930 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-86pg2" podStartSLOduration=2.146820361 podStartE2EDuration="20.608908042s" podCreationTimestamp="2025-07-06 23:56:34 +0000 UTC" firstStartedPulling="2025-07-06 23:56:34.579699537 +0000 UTC m=+18.376922497" lastFinishedPulling="2025-07-06 23:56:53.041787318 +0000 UTC m=+36.839010178" observedRunningTime="2025-07-06 23:56:53.63992774 +0000 UTC m=+37.437150600" watchObservedRunningTime="2025-07-06 23:56:54.608908042 +0000 UTC m=+38.406131002" Jul 6 23:56:54.669896 systemd[1]: Created slice kubepods-besteffort-pod2e16c9e1_cae8_40c2_93c2_24aae9bc2851.slice - libcontainer container kubepods-besteffort-pod2e16c9e1_cae8_40c2_93c2_24aae9bc2851.slice. 
Jul 6 23:56:54.722101 kubelet[3121]: I0706 23:56:54.722045 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqst\" (UniqueName: \"kubernetes.io/projected/2e16c9e1-cae8-40c2-93c2-24aae9bc2851-kube-api-access-fsqst\") pod \"whisker-58b7fd8c9c-w4wlf\" (UID: \"2e16c9e1-cae8-40c2-93c2-24aae9bc2851\") " pod="calico-system/whisker-58b7fd8c9c-w4wlf" Jul 6 23:56:54.722525 kubelet[3121]: I0706 23:56:54.722116 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e16c9e1-cae8-40c2-93c2-24aae9bc2851-whisker-backend-key-pair\") pod \"whisker-58b7fd8c9c-w4wlf\" (UID: \"2e16c9e1-cae8-40c2-93c2-24aae9bc2851\") " pod="calico-system/whisker-58b7fd8c9c-w4wlf" Jul 6 23:56:54.722525 kubelet[3121]: I0706 23:56:54.722144 3121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e16c9e1-cae8-40c2-93c2-24aae9bc2851-whisker-ca-bundle\") pod \"whisker-58b7fd8c9c-w4wlf\" (UID: \"2e16c9e1-cae8-40c2-93c2-24aae9bc2851\") " pod="calico-system/whisker-58b7fd8c9c-w4wlf" Jul 6 23:56:54.976139 containerd[1675]: time="2025-07-06T23:56:54.974962807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58b7fd8c9c-w4wlf,Uid:2e16c9e1-cae8-40c2-93c2-24aae9bc2851,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:55.204935 systemd-networkd[1418]: calidd275194798: Link UP Jul 6 23:56:55.206155 systemd-networkd[1418]: calidd275194798: Gained carrier Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.065 [INFO][4469] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.080 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0 whisker-58b7fd8c9c- calico-system 2e16c9e1-cae8-40c2-93c2-24aae9bc2851 898 0 2025-07-06 23:56:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58b7fd8c9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-a-2f8c6d8615 whisker-58b7fd8c9c-w4wlf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidd275194798 [] [] }} ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Namespace="calico-system" Pod="whisker-58b7fd8c9c-w4wlf" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.080 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Namespace="calico-system" Pod="whisker-58b7fd8c9c-w4wlf" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.131 [INFO][4481] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" HandleID="k8s-pod-network.a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.132 [INFO][4481] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" HandleID="k8s-pod-network.a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-2f8c6d8615", "pod":"whisker-58b7fd8c9c-w4wlf", "timestamp":"2025-07-06 23:56:55.131979867 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2f8c6d8615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.132 [INFO][4481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.132 [INFO][4481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.132 [INFO][4481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2f8c6d8615' Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.140 [INFO][4481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.146 [INFO][4481] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.150 [INFO][4481] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.152 [INFO][4481] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.157 [INFO][4481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.157 [INFO][4481] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.159 [INFO][4481] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1 Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.165 [INFO][4481] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.173 [INFO][4481] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.129/26] block=192.168.72.128/26 handle="k8s-pod-network.a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.174 [INFO][4481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.129/26] handle="k8s-pod-network.a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.174 [INFO][4481] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:55.235360 containerd[1675]: 2025-07-06 23:56:55.174 [INFO][4481] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.129/26] IPv6=[] ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" HandleID="k8s-pod-network.a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" Jul 6 23:56:55.240536 containerd[1675]: 2025-07-06 23:56:55.176 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Namespace="calico-system" Pod="whisker-58b7fd8c9c-w4wlf" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0", GenerateName:"whisker-58b7fd8c9c-", Namespace:"calico-system", SelfLink:"", UID:"2e16c9e1-cae8-40c2-93c2-24aae9bc2851", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58b7fd8c9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"", Pod:"whisker-58b7fd8c9c-w4wlf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd275194798", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:55.240536 containerd[1675]: 2025-07-06 23:56:55.176 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.129/32] ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Namespace="calico-system" Pod="whisker-58b7fd8c9c-w4wlf" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" Jul 6 23:56:55.240536 containerd[1675]: 2025-07-06 23:56:55.176 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd275194798 ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Namespace="calico-system" Pod="whisker-58b7fd8c9c-w4wlf" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" Jul 6 23:56:55.240536 containerd[1675]: 2025-07-06 23:56:55.206 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Namespace="calico-system" Pod="whisker-58b7fd8c9c-w4wlf" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" Jul 6 23:56:55.240536 containerd[1675]: 2025-07-06 23:56:55.208 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Namespace="calico-system" Pod="whisker-58b7fd8c9c-w4wlf" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0", GenerateName:"whisker-58b7fd8c9c-", Namespace:"calico-system", SelfLink:"", UID:"2e16c9e1-cae8-40c2-93c2-24aae9bc2851", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58b7fd8c9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1", Pod:"whisker-58b7fd8c9c-w4wlf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd275194798", MAC:"b2:db:89:d2:80:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:55.240536 containerd[1675]: 2025-07-06 23:56:55.227 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1" Namespace="calico-system" Pod="whisker-58b7fd8c9c-w4wlf" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--58b7fd8c9c--w4wlf-eth0" Jul 6 23:56:55.281132 containerd[1675]: time="2025-07-06T23:56:55.280565824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:55.281132 containerd[1675]: time="2025-07-06T23:56:55.280620426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:55.281132 containerd[1675]: time="2025-07-06T23:56:55.280715928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:55.281132 containerd[1675]: time="2025-07-06T23:56:55.280838731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:55.318664 systemd[1]: run-containerd-runc-k8s.io-a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1-runc.FpYqbf.mount: Deactivated successfully. Jul 6 23:56:55.328738 systemd[1]: Started cri-containerd-a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1.scope - libcontainer container a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1. 
Jul 6 23:56:55.409276 containerd[1675]: time="2025-07-06T23:56:55.409236705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58b7fd8c9c-w4wlf,Uid:2e16c9e1-cae8-40c2-93c2-24aae9bc2851,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1\"" Jul 6 23:56:55.415051 containerd[1675]: time="2025-07-06T23:56:55.413898117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:56:55.480141 kernel: bpftool[4564]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:56:55.795074 systemd-networkd[1418]: vxlan.calico: Link UP Jul 6 23:56:55.795084 systemd-networkd[1418]: vxlan.calico: Gained carrier Jul 6 23:56:56.323259 systemd-networkd[1418]: calidd275194798: Gained IPv6LL Jul 6 23:56:56.367476 kubelet[3121]: I0706 23:56:56.367426 3121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dc25359-c7f1-4176-9811-c8a3b8856ebe" path="/var/lib/kubelet/pods/6dc25359-c7f1-4176-9811-c8a3b8856ebe/volumes" Jul 6 23:56:56.597452 containerd[1675]: time="2025-07-06T23:56:56.596990407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:56.599333 containerd[1675]: time="2025-07-06T23:56:56.599266662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:56:56.603151 containerd[1675]: time="2025-07-06T23:56:56.603117856Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:56.609816 containerd[1675]: time="2025-07-06T23:56:56.609642015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:56.615277 containerd[1675]: time="2025-07-06T23:56:56.614801740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.200859222s" Jul 6 23:56:56.615277 containerd[1675]: time="2025-07-06T23:56:56.614848141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:56:56.620536 containerd[1675]: time="2025-07-06T23:56:56.620500179Z" level=info msg="CreateContainer within sandbox \"a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:56:56.665172 containerd[1675]: time="2025-07-06T23:56:56.665128165Z" level=info msg="CreateContainer within sandbox \"a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8bc52bd9e088954c5324df2504eb08135133d2a51072b936cd24200649ff378f\"" Jul 6 23:56:56.667167 containerd[1675]: time="2025-07-06T23:56:56.665841482Z" level=info msg="StartContainer for \"8bc52bd9e088954c5324df2504eb08135133d2a51072b936cd24200649ff378f\"" Jul 6 23:56:56.705178 systemd[1]: Started 
cri-containerd-8bc52bd9e088954c5324df2504eb08135133d2a51072b936cd24200649ff378f.scope - libcontainer container 8bc52bd9e088954c5324df2504eb08135133d2a51072b936cd24200649ff378f. Jul 6 23:56:56.759843 containerd[1675]: time="2025-07-06T23:56:56.759523861Z" level=info msg="StartContainer for \"8bc52bd9e088954c5324df2504eb08135133d2a51072b936cd24200649ff378f\" returns successfully" Jul 6 23:56:56.763249 containerd[1675]: time="2025-07-06T23:56:56.763198051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:56:57.365318 containerd[1675]: time="2025-07-06T23:56:57.363952266Z" level=info msg="StopPodSandbox for \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\"" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.411 [INFO][4692] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.411 [INFO][4692] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" iface="eth0" netns="/var/run/netns/cni-5559dcba-46e0-b158-3d51-85aa3c1ab922" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.412 [INFO][4692] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" iface="eth0" netns="/var/run/netns/cni-5559dcba-46e0-b158-3d51-85aa3c1ab922" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.412 [INFO][4692] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" iface="eth0" netns="/var/run/netns/cni-5559dcba-46e0-b158-3d51-85aa3c1ab922" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.412 [INFO][4692] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.412 [INFO][4692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.432 [INFO][4699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" HandleID="k8s-pod-network.0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.432 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.432 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.438 [WARNING][4699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" HandleID="k8s-pod-network.0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.438 [INFO][4699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" HandleID="k8s-pod-network.0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.440 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.442481 containerd[1675]: 2025-07-06 23:56:57.441 [INFO][4692] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:56:57.446060 containerd[1675]: time="2025-07-06T23:56:57.444105417Z" level=info msg="TearDown network for sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\" successfully" Jul 6 23:56:57.446060 containerd[1675]: time="2025-07-06T23:56:57.444149818Z" level=info msg="StopPodSandbox for \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\" returns successfully" Jul 6 23:56:57.446060 containerd[1675]: time="2025-07-06T23:56:57.444834634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4985b7cd-r7qrd,Uid:d596fb90-2a5b-4b75-b0f5-1553ebaf2652,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:56:57.447898 systemd[1]: run-netns-cni\x2d5559dcba\x2d46e0\x2db158\x2d3d51\x2d85aa3c1ab922.mount: Deactivated successfully. 
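A few entries back the kernel warned "bpftool[4564]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set": since 6.3 the kernel asks memfd_create() callers to declare up front whether the anonymous file may ever become executable. A minimal sketch of the quiet, sealed-non-executable variant via golang.org/x/sys/unix (assuming an x/sys release recent enough to export MFD_NOEXEC_SEAL, and a 6.3+ kernel):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Passing MFD_NOEXEC_SEAL seals the memfd as non-executable up
        // front, which also silences the kernel warning seen above.
        fd, err := unix.MemfdCreate("scratch", unix.MFD_CLOEXEC|unix.MFD_NOEXEC_SEAL)
        if err != nil {
            fmt.Fprintln(os.Stderr, "memfd_create:", err)
            os.Exit(1)
        }
        defer unix.Close(fd)

        _, _ = unix.Write(fd, []byte("anonymous, non-executable backing memory"))
        fmt.Println("memfd created as fd", fd)
    }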
Jul 6 23:56:57.613683 systemd-networkd[1418]: caliadf61eb0654: Link UP Jul 6 23:56:57.615334 systemd-networkd[1418]: caliadf61eb0654: Gained carrier Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.547 [INFO][4706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0 calico-apiserver-5b4985b7cd- calico-apiserver d596fb90-2a5b-4b75-b0f5-1553ebaf2652 916 0 2025-07-06 23:56:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b4985b7cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-2f8c6d8615 calico-apiserver-5b4985b7cd-r7qrd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliadf61eb0654 [] [] }} ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-r7qrd" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.547 [INFO][4706] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-r7qrd" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.572 [INFO][4717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" HandleID="k8s-pod-network.127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.572 [INFO][4717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" HandleID="k8s-pod-network.127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-2f8c6d8615", "pod":"calico-apiserver-5b4985b7cd-r7qrd", "timestamp":"2025-07-06 23:56:57.572266535 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2f8c6d8615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.572 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.572 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.572 [INFO][4717] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2f8c6d8615' Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.579 [INFO][4717] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.584 [INFO][4717] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.587 [INFO][4717] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.589 [INFO][4717] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.591 [INFO][4717] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.591 [INFO][4717] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.593 [INFO][4717] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644 Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.599 [INFO][4717] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.608 [INFO][4717] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.130/26] block=192.168.72.128/26 handle="k8s-pod-network.127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.608 [INFO][4717] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.130/26] handle="k8s-pod-network.127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.608 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
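This second walkthrough lands on 192.168.72.130: the node's single /26 affinity serves every local pod, handing out consecutive ordinals (.129 for the whisker pod above, .130 here, .131 and .132 further below). As a quick check of the arithmetic, a /26 leaves 6 host bits, so the block spans 64 addresses:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.72.128/26")

        // 32 - 26 = 6 host bits, so 2^6 = 64 addresses in the block.
        size := 1 << (32 - block.Bits())

        last := block.Addr()
        for i := 0; i < size-1; i++ {
            last = last.Next()
        }
        // Prints: 192.168.72.128/26 spans 64 addresses: 192.168.72.128-192.168.72.191
        fmt.Printf("%v spans %d addresses: %v-%v\n", block, size, block.Addr(), last)

        // Every address assigned in this log falls inside the block.
        for _, s := range []string{"192.168.72.129", "192.168.72.130", "192.168.72.131", "192.168.72.132"} {
            fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
        }
    }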
Jul 6 23:56:57.642512 containerd[1675]: 2025-07-06 23:56:57.609 [INFO][4717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.130/26] IPv6=[] ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" HandleID="k8s-pod-network.127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.646064 containerd[1675]: 2025-07-06 23:56:57.610 [INFO][4706] cni-plugin/k8s.go 418: Populated endpoint ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-r7qrd" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0", GenerateName:"calico-apiserver-5b4985b7cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d596fb90-2a5b-4b75-b0f5-1553ebaf2652", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4985b7cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"", Pod:"calico-apiserver-5b4985b7cd-r7qrd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadf61eb0654", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.646064 containerd[1675]: 2025-07-06 23:56:57.610 [INFO][4706] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.130/32] ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-r7qrd" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.646064 containerd[1675]: 2025-07-06 23:56:57.610 [INFO][4706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadf61eb0654 ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-r7qrd" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.646064 containerd[1675]: 2025-07-06 23:56:57.615 [INFO][4706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-r7qrd" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.646064 containerd[1675]: 2025-07-06 23:56:57.618 [INFO][4706] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-r7qrd" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0", GenerateName:"calico-apiserver-5b4985b7cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d596fb90-2a5b-4b75-b0f5-1553ebaf2652", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4985b7cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644", Pod:"calico-apiserver-5b4985b7cd-r7qrd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadf61eb0654", MAC:"9a:45:37:48:a9:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.646064 containerd[1675]: 2025-07-06 23:56:57.639 [INFO][4706] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-r7qrd" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:56:57.668706 systemd-networkd[1418]: vxlan.calico: Gained IPv6LL Jul 6 23:56:57.679448 containerd[1675]: time="2025-07-06T23:56:57.679196036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:57.679448 containerd[1675]: time="2025-07-06T23:56:57.679269538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:57.679448 containerd[1675]: time="2025-07-06T23:56:57.679318639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:57.679741 containerd[1675]: time="2025-07-06T23:56:57.679566445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:57.710193 systemd[1]: Started cri-containerd-127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644.scope - libcontainer container 127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644. 
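Each workload gets a host-side veth with a stable name (calidd275194798 for the whisker pod, caliadf61eb0654 here). Calico builds these from a hash of the workload's identity so the name is deterministic and fits the kernel's 15-character interface-name limit (IFNAMSIZ). The sketch below mimics that scheme, "cali" plus the first 11 hex characters of a SHA-1, but the exact string Calico hashes is not shown in this log, so the printed names will not reproduce the logged ones:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName mimics Calico-style stable interface naming: a fixed
    // prefix plus a truncated hash is unique per workload and stays
    // within the kernel's 15-character limit (4 + 11 = 15).
    func vethName(namespace, pod string) string {
        sum := sha1.Sum([]byte(namespace + "." + pod))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("calico-system", "whisker-58b7fd8c9c-w4wlf"))
        fmt.Println(vethName("calico-apiserver", "calico-apiserver-5b4985b7cd-r7qrd"))
    }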
Jul 6 23:56:57.753931 containerd[1675]: time="2025-07-06T23:56:57.753782151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4985b7cd-r7qrd,Uid:d596fb90-2a5b-4b75-b0f5-1553ebaf2652,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644\"" Jul 6 23:56:58.365817 containerd[1675]: time="2025-07-06T23:56:58.365441232Z" level=info msg="StopPodSandbox for \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\"" Jul 6 23:56:58.368100 containerd[1675]: time="2025-07-06T23:56:58.367866591Z" level=info msg="StopPodSandbox for \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\"" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.499 [INFO][4795] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.500 [INFO][4795] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" iface="eth0" netns="/var/run/netns/cni-db0e9627-2275-9884-1583-40d1bf54bfd7" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.500 [INFO][4795] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" iface="eth0" netns="/var/run/netns/cni-db0e9627-2275-9884-1583-40d1bf54bfd7" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.503 [INFO][4795] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" iface="eth0" netns="/var/run/netns/cni-db0e9627-2275-9884-1583-40d1bf54bfd7" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.503 [INFO][4795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.503 [INFO][4795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.549 [INFO][4811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" HandleID="k8s-pod-network.c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.550 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.550 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.558 [WARNING][4811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" HandleID="k8s-pod-network.c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.558 [INFO][4811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" HandleID="k8s-pod-network.c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.562 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:58.570128 containerd[1675]: 2025-07-06 23:56:58.566 [INFO][4795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:56:58.573491 containerd[1675]: time="2025-07-06T23:56:58.571383443Z" level=info msg="TearDown network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\" successfully" Jul 6 23:56:58.573491 containerd[1675]: time="2025-07-06T23:56:58.571419743Z" level=info msg="StopPodSandbox for \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\" returns successfully" Jul 6 23:56:58.573491 containerd[1675]: time="2025-07-06T23:56:58.572329266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4985b7cd-b2wsx,Uid:315fadc6-402d-4c42-a716-cdde0ac33312,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:56:58.576996 systemd[1]: run-netns-cni\x2ddb0e9627\x2d2275\x2d9884\x2d1583\x2d40d1bf54bfd7.mount: Deactivated successfully. Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.495 [INFO][4794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.498 [INFO][4794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" iface="eth0" netns="/var/run/netns/cni-90471115-9063-6bcf-ea83-ff1151c0c634" Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.498 [INFO][4794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" iface="eth0" netns="/var/run/netns/cni-90471115-9063-6bcf-ea83-ff1151c0c634" Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.499 [INFO][4794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" iface="eth0" netns="/var/run/netns/cni-90471115-9063-6bcf-ea83-ff1151c0c634" Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.499 [INFO][4794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.499 [INFO][4794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.581 [INFO][4809] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" HandleID="k8s-pod-network.4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.582 [INFO][4809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.582 [INFO][4809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.589 [WARNING][4809] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" HandleID="k8s-pod-network.4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.589 [INFO][4809] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" HandleID="k8s-pod-network.4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.591 [INFO][4809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:58.596487 containerd[1675]: 2025-07-06 23:56:58.593 [INFO][4794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:56:58.599820 containerd[1675]: time="2025-07-06T23:56:58.596687758Z" level=info msg="TearDown network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\" successfully" Jul 6 23:56:58.599820 containerd[1675]: time="2025-07-06T23:56:58.596719259Z" level=info msg="StopPodSandbox for \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\" returns successfully" Jul 6 23:56:58.599820 containerd[1675]: time="2025-07-06T23:56:58.597672682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c8fc6bd-5mrzx,Uid:3050acde-8e24-48b0-af1c-c0021f4ca060,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:58.658654 systemd[1]: run-netns-cni\x2d90471115\x2d9063\x2d6bcf\x2dea83\x2dff1151c0c634.mount: Deactivated successfully. 
Jul 6 23:56:58.893681 systemd-networkd[1418]: cali11c8c76ac1e: Link UP Jul 6 23:56:58.896736 systemd-networkd[1418]: cali11c8c76ac1e: Gained carrier Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.752 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0 calico-apiserver-5b4985b7cd- calico-apiserver 315fadc6-402d-4c42-a716-cdde0ac33312 926 0 2025-07-06 23:56:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b4985b7cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-2f8c6d8615 calico-apiserver-5b4985b7cd-b2wsx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali11c8c76ac1e [] [] }} ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-b2wsx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.754 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-b2wsx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.814 [INFO][4847] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" HandleID="k8s-pod-network.a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.814 [INFO][4847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" HandleID="k8s-pod-network.a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-2f8c6d8615", "pod":"calico-apiserver-5b4985b7cd-b2wsx", "timestamp":"2025-07-06 23:56:58.814092848 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2f8c6d8615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.814 [INFO][4847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.814 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
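These Attempt:1 sandboxes re-enter the plugin through the standard CNI contract: containerd executes the plugin binary with the operation and identifiers in environment variables and the network config on stdin, and Calico's k8s.go then extracts pod identity from CNI_ARGS. A sketch of the view from inside one invocation (the variable names come from the CNI spec; the values in comments echo this log):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The CNI runtime (containerd here) sets these for every plugin
        // invocation; Calico's plugin.go/k8s.go read pod identity out of
        // CNI_ARGS (K8S_POD_NAMESPACE, K8S_POD_NAME, ...).
        for _, key := range []string{
            "CNI_COMMAND",     // ADD / DEL / CHECK / VERSION
            "CNI_CONTAINERID", // e.g. a1c067147bdb...
            "CNI_NETNS",       // e.g. /var/run/netns/cni-...
            "CNI_IFNAME",      // eth0, as in the endpoints above
            "CNI_ARGS",
            "CNI_PATH",
        } {
            fmt.Printf("%s=%s\n", key, os.Getenv(key))
        }
    }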
Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.814 [INFO][4847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2f8c6d8615' Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.825 [INFO][4847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.840 [INFO][4847] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.848 [INFO][4847] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.852 [INFO][4847] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.856 [INFO][4847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.856 [INFO][4847] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.859 [INFO][4847] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6 Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.867 [INFO][4847] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.878 [INFO][4847] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.131/26] block=192.168.72.128/26 handle="k8s-pod-network.a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.878 [INFO][4847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.131/26] handle="k8s-pod-network.a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.878 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:56:58.927801 containerd[1675]: 2025-07-06 23:56:58.879 [INFO][4847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.131/26] IPv6=[] ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" HandleID="k8s-pod-network.a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.929592 containerd[1675]: 2025-07-06 23:56:58.886 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-b2wsx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0", GenerateName:"calico-apiserver-5b4985b7cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"315fadc6-402d-4c42-a716-cdde0ac33312", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4985b7cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"", Pod:"calico-apiserver-5b4985b7cd-b2wsx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11c8c76ac1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:58.929592 containerd[1675]: 2025-07-06 23:56:58.886 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.131/32] ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-b2wsx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.929592 containerd[1675]: 2025-07-06 23:56:58.886 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11c8c76ac1e ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-b2wsx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.929592 containerd[1675]: 2025-07-06 23:56:58.900 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-b2wsx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.929592 containerd[1675]: 2025-07-06 23:56:58.900 [INFO][4823] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-b2wsx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0", GenerateName:"calico-apiserver-5b4985b7cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"315fadc6-402d-4c42-a716-cdde0ac33312", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4985b7cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6", Pod:"calico-apiserver-5b4985b7cd-b2wsx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11c8c76ac1e", MAC:"ba:07:e5:8f:a7:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:58.929592 containerd[1675]: 2025-07-06 23:56:58.919 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6" Namespace="calico-apiserver" Pod="calico-apiserver-5b4985b7cd-b2wsx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:56:58.993472 systemd-networkd[1418]: calie94fd8cc1a7: Link UP Jul 6 23:56:58.993766 systemd-networkd[1418]: calie94fd8cc1a7: Gained carrier Jul 6 23:56:59.028046 containerd[1675]: time="2025-07-06T23:56:59.027038628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:59.028046 containerd[1675]: time="2025-07-06T23:56:59.027109730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:59.028046 containerd[1675]: time="2025-07-06T23:56:59.027125030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:59.028710 containerd[1675]: time="2025-07-06T23:56:59.027267334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.787 [INFO][4832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0 calico-kube-controllers-68c8fc6bd- calico-system 3050acde-8e24-48b0-af1c-c0021f4ca060 925 0 2025-07-06 23:56:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68c8fc6bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-a-2f8c6d8615 calico-kube-controllers-68c8fc6bd-5mrzx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie94fd8cc1a7 [] [] }} ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Namespace="calico-system" Pod="calico-kube-controllers-68c8fc6bd-5mrzx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.787 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Namespace="calico-system" Pod="calico-kube-controllers-68c8fc6bd-5mrzx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.867 [INFO][4853] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" HandleID="k8s-pod-network.f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.867 [INFO][4853] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" HandleID="k8s-pod-network.f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031fa20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-2f8c6d8615", "pod":"calico-kube-controllers-68c8fc6bd-5mrzx", "timestamp":"2025-07-06 23:56:58.867653651 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2f8c6d8615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.868 [INFO][4853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.879 [INFO][4853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
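Every allocation and release in this section is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": separate CNI invocations run as separate processes, so assignments are serialized node-wide to keep two pods from claiming the same ordinal. A file-lock sketch of the pattern (illustrative only; the lock path here is an assumption, not necessarily Calico's):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    // withHostWideLock serializes critical sections across processes by
    // taking an exclusive flock on a well-known file, mirroring the
    // acquire/release bracketing in the IPAM log lines above.
    func withHostWideLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()

        if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil { // blocks until free
            return err
        }
        defer unix.Flock(int(f.Fd()), unix.LOCK_UN)

        fmt.Println("Acquired host-wide IPAM lock.")
        defer fmt.Println("Released host-wide IPAM lock.")
        return fn()
    }

    func main() {
        _ = withHostWideLock("/tmp/ipam.lock", func() error {
            // assign or release addresses here
            return nil
        })
    }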
Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.879 [INFO][4853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2f8c6d8615' Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.928 [INFO][4853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.938 [INFO][4853] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.948 [INFO][4853] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.952 [INFO][4853] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.956 [INFO][4853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.956 [INFO][4853] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.959 [INFO][4853] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2 Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.967 [INFO][4853] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.982 [INFO][4853] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.132/26] block=192.168.72.128/26 handle="k8s-pod-network.f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.982 [INFO][4853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.132/26] handle="k8s-pod-network.f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.982 [INFO][4853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
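The bracketed format of these entries ([INFO][4853] ipam/ipam.go 1256: Successfully claimed IPs: ...) is regular enough to audit mechanically. A small sketch that pulls the claimed addresses out of lines like the four in this section:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Matches the "Successfully claimed IPs" entries emitted by
        // ipam.go; capture group 1 is the claimed CIDR list.
        re := regexp.MustCompile(`Successfully claimed IPs: \[([^\]]+)\]`)

        lines := []string{
            `[INFO][4481] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.129/26] block=192.168.72.128/26`,
            `[INFO][4853] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.132/26] block=192.168.72.128/26`,
        }
        for _, l := range lines {
            if m := re.FindStringSubmatch(l); m != nil {
                fmt.Println("claimed:", m[1]) // -> 192.168.72.129/26, then 192.168.72.132/26
            }
        }
    }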
Jul 6 23:56:59.036279 containerd[1675]: 2025-07-06 23:56:58.983 [INFO][4853] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.132/26] IPv6=[] ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" HandleID="k8s-pod-network.f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:56:59.037865 containerd[1675]: 2025-07-06 23:56:58.987 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Namespace="calico-system" Pod="calico-kube-controllers-68c8fc6bd-5mrzx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0", GenerateName:"calico-kube-controllers-68c8fc6bd-", Namespace:"calico-system", SelfLink:"", UID:"3050acde-8e24-48b0-af1c-c0021f4ca060", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c8fc6bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"", Pod:"calico-kube-controllers-68c8fc6bd-5mrzx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie94fd8cc1a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:59.037865 containerd[1675]: 2025-07-06 23:56:58.987 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.132/32] ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Namespace="calico-system" Pod="calico-kube-controllers-68c8fc6bd-5mrzx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:56:59.037865 containerd[1675]: 2025-07-06 23:56:58.987 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie94fd8cc1a7 ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Namespace="calico-system" Pod="calico-kube-controllers-68c8fc6bd-5mrzx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:56:59.037865 containerd[1675]: 2025-07-06 23:56:58.997 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Namespace="calico-system" Pod="calico-kube-controllers-68c8fc6bd-5mrzx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 
23:56:59.037865 containerd[1675]: 2025-07-06 23:56:59.000 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Namespace="calico-system" Pod="calico-kube-controllers-68c8fc6bd-5mrzx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0", GenerateName:"calico-kube-controllers-68c8fc6bd-", Namespace:"calico-system", SelfLink:"", UID:"3050acde-8e24-48b0-af1c-c0021f4ca060", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c8fc6bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2", Pod:"calico-kube-controllers-68c8fc6bd-5mrzx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie94fd8cc1a7", MAC:"86:f2:52:84:bc:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:59.037865 containerd[1675]: 2025-07-06 23:56:59.029 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2" Namespace="calico-system" Pod="calico-kube-controllers-68c8fc6bd-5mrzx" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:56:59.081369 systemd[1]: Started cri-containerd-a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6.scope - libcontainer container a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6. Jul 6 23:56:59.144382 containerd[1675]: time="2025-07-06T23:56:59.144014874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:59.144382 containerd[1675]: time="2025-07-06T23:56:59.144102276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:59.144382 containerd[1675]: time="2025-07-06T23:56:59.144149478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:59.144382 containerd[1675]: time="2025-07-06T23:56:59.144273981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:59.179354 systemd[1]: Started cri-containerd-f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2.scope - libcontainer container f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2. Jul 6 23:56:59.188435 containerd[1675]: time="2025-07-06T23:56:59.188366053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4985b7cd-b2wsx,Uid:315fadc6-402d-4c42-a716-cdde0ac33312,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6\"" Jul 6 23:56:59.244444 containerd[1675]: time="2025-07-06T23:56:59.244124510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c8fc6bd-5mrzx,Uid:3050acde-8e24-48b0-af1c-c0021f4ca060,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2\"" Jul 6 23:56:59.367832 containerd[1675]: time="2025-07-06T23:56:59.367772918Z" level=info msg="StopPodSandbox for \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\"" Jul 6 23:56:59.371903 containerd[1675]: time="2025-07-06T23:56:59.371864118Z" level=info msg="StopPodSandbox for \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\"" Jul 6 23:56:59.382828 containerd[1675]: time="2025-07-06T23:56:59.382770283Z" level=info msg="StopPodSandbox for \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\"" Jul 6 23:56:59.385101 containerd[1675]: time="2025-07-06T23:56:59.384991837Z" level=info msg="StopPodSandbox for \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\"" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.493 [INFO][4998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.495 [INFO][4998] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" iface="eth0" netns="/var/run/netns/cni-8cf13edf-d6b2-9500-6428-ceecfdb9e58e" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.495 [INFO][4998] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" iface="eth0" netns="/var/run/netns/cni-8cf13edf-d6b2-9500-6428-ceecfdb9e58e" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.496 [INFO][4998] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" iface="eth0" netns="/var/run/netns/cni-8cf13edf-d6b2-9500-6428-ceecfdb9e58e" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.496 [INFO][4998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.496 [INFO][4998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.540 [INFO][5018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" HandleID="k8s-pod-network.18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.540 [INFO][5018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.540 [INFO][5018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.558 [WARNING][5018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" HandleID="k8s-pod-network.18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.558 [INFO][5018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" HandleID="k8s-pod-network.18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.559 [INFO][5018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:59.565229 containerd[1675]: 2025-07-06 23:56:59.562 [INFO][4998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:56:59.568093 containerd[1675]: time="2025-07-06T23:56:59.567733083Z" level=info msg="TearDown network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\" successfully" Jul 6 23:56:59.569269 containerd[1675]: time="2025-07-06T23:56:59.568180394Z" level=info msg="StopPodSandbox for \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\" returns successfully" Jul 6 23:56:59.570285 containerd[1675]: time="2025-07-06T23:56:59.569930837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldbbw,Uid:80d646f2-c2b8-4ec5-90f1-97a890b8837a,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.556 [INFO][4999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.557 [INFO][4999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" iface="eth0" netns="/var/run/netns/cni-bd5ac41a-af48-059d-7304-639c2f2f5d36" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.557 [INFO][4999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" iface="eth0" netns="/var/run/netns/cni-bd5ac41a-af48-059d-7304-639c2f2f5d36" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.559 [INFO][4999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" iface="eth0" netns="/var/run/netns/cni-bd5ac41a-af48-059d-7304-639c2f2f5d36" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.559 [INFO][4999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.560 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.618 [INFO][5028] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" HandleID="k8s-pod-network.70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.619 [INFO][5028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.619 [INFO][5028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.630 [WARNING][5028] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" HandleID="k8s-pod-network.70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.630 [INFO][5028] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" HandleID="k8s-pod-network.70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.632 [INFO][5028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:59.640495 containerd[1675]: 2025-07-06 23:56:59.634 [INFO][4999] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:56:59.643126 containerd[1675]: time="2025-07-06T23:56:59.641867487Z" level=info msg="TearDown network for sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\" successfully" Jul 6 23:56:59.643126 containerd[1675]: time="2025-07-06T23:56:59.641902088Z" level=info msg="StopPodSandbox for \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\" returns successfully" Jul 6 23:56:59.643126 containerd[1675]: time="2025-07-06T23:56:59.642801510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rv7bm,Uid:668cd08b-4d24-45a3-a679-683237a42032,Namespace:kube-system,Attempt:1,}" Jul 6 23:56:59.645832 containerd[1675]: time="2025-07-06T23:56:59.645794582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:59.652254 systemd-networkd[1418]: caliadf61eb0654: Gained IPv6LL Jul 6 23:56:59.658633 containerd[1675]: time="2025-07-06T23:56:59.658587694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 6 23:56:59.660268 systemd[1]: run-containerd-runc-k8s.io-a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6-runc.kLBFVy.mount: Deactivated successfully. Jul 6 23:56:59.660386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852105640.mount: Deactivated successfully. Jul 6 23:56:59.660482 systemd[1]: run-netns-cni\x2d8cf13edf\x2dd6b2\x2d9500\x2d6428\x2dceecfdb9e58e.mount: Deactivated successfully. Jul 6 23:56:59.660573 systemd[1]: run-netns-cni\x2dbd5ac41a\x2daf48\x2d059d\x2d7304\x2d639c2f2f5d36.mount: Deactivated successfully. 
Jul 6 23:56:59.673497 containerd[1675]: time="2025-07-06T23:56:59.673270251Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:59.689077 containerd[1675]: time="2025-07-06T23:56:59.688202214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:59.689077 containerd[1675]: time="2025-07-06T23:56:59.688919832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.92567878s" Jul 6 23:56:59.689077 containerd[1675]: time="2025-07-06T23:56:59.688959533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 6 23:56:59.695152 containerd[1675]: time="2025-07-06T23:56:59.695105082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:56:59.697461 containerd[1675]: time="2025-07-06T23:56:59.697421338Z" level=info msg="CreateContainer within sandbox \"a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.597 [INFO][4979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.598 [INFO][4979] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" iface="eth0" netns="/var/run/netns/cni-926e9d43-634d-d64a-4621-648c069aa319" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.599 [INFO][4979] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" iface="eth0" netns="/var/run/netns/cni-926e9d43-634d-d64a-4621-648c069aa319" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.602 [INFO][4979] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" iface="eth0" netns="/var/run/netns/cni-926e9d43-634d-d64a-4621-648c069aa319" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.602 [INFO][4979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.603 [INFO][4979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.685 [INFO][5043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" HandleID="k8s-pod-network.74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.687 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.689 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.721 [WARNING][5043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" HandleID="k8s-pod-network.74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.721 [INFO][5043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" HandleID="k8s-pod-network.74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.723 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:59.737490 containerd[1675]: 2025-07-06 23:56:59.728 [INFO][4979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:56:59.739400 containerd[1675]: time="2025-07-06T23:56:59.738471537Z" level=info msg="TearDown network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\" successfully" Jul 6 23:56:59.739400 containerd[1675]: time="2025-07-06T23:56:59.738510738Z" level=info msg="StopPodSandbox for \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\" returns successfully" Jul 6 23:56:59.742905 containerd[1675]: time="2025-07-06T23:56:59.741242605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bz9rt,Uid:8f8547f6-c7c2-4c77-af76-00fb7e939448,Namespace:kube-system,Attempt:1,}" Jul 6 23:56:59.745197 systemd[1]: run-netns-cni\x2d926e9d43\x2d634d\x2dd64a\x2d4621\x2d648c069aa319.mount: Deactivated successfully. Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.593 [INFO][5003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.595 [INFO][5003] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" iface="eth0" netns="/var/run/netns/cni-e4bf28ae-c8e2-c3b4-03f8-12456489e767" Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.596 [INFO][5003] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" iface="eth0" netns="/var/run/netns/cni-e4bf28ae-c8e2-c3b4-03f8-12456489e767" Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.597 [INFO][5003] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" iface="eth0" netns="/var/run/netns/cni-e4bf28ae-c8e2-c3b4-03f8-12456489e767" Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.597 [INFO][5003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.597 [INFO][5003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.700 [INFO][5038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" HandleID="k8s-pod-network.7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.704 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.723 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.734 [WARNING][5038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" HandleID="k8s-pod-network.7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.734 [INFO][5038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" HandleID="k8s-pod-network.7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.745 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:59.755662 containerd[1675]: 2025-07-06 23:56:59.751 [INFO][5003] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:56:59.756267 containerd[1675]: time="2025-07-06T23:56:59.754881336Z" level=info msg="TearDown network for sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\" successfully" Jul 6 23:56:59.756267 containerd[1675]: time="2025-07-06T23:56:59.756121567Z" level=info msg="StopPodSandbox for \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\" returns successfully" Jul 6 23:56:59.761832 containerd[1675]: time="2025-07-06T23:56:59.760191866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8nwl8,Uid:f7ba973d-dc0d-426a-8adc-f92cde7b6fed,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:59.761316 systemd[1]: run-netns-cni\x2de4bf28ae\x2dc8e2\x2dc3b4\x2d03f8\x2d12456489e767.mount: Deactivated successfully. Jul 6 23:56:59.818720 containerd[1675]: time="2025-07-06T23:56:59.816580037Z" level=info msg="CreateContainer within sandbox \"a2f5b05ca18074b903832576085962699070f0ff5e8b07c1dd53dc9198e918d1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"24bfbea4e45888031957addb5c3288a1ae3f25fc15a8a7a105b320dee9496c1b\"" Jul 6 23:56:59.821708 containerd[1675]: time="2025-07-06T23:56:59.820172825Z" level=info msg="StartContainer for \"24bfbea4e45888031957addb5c3288a1ae3f25fc15a8a7a105b320dee9496c1b\"" Jul 6 23:56:59.921195 systemd-networkd[1418]: cali4875da9ee3c: Link UP Jul 6 23:56:59.923612 systemd-networkd[1418]: cali4875da9ee3c: Gained carrier Jul 6 23:56:59.947101 systemd[1]: Started cri-containerd-24bfbea4e45888031957addb5c3288a1ae3f25fc15a8a7a105b320dee9496c1b.scope - libcontainer container 24bfbea4e45888031957addb5c3288a1ae3f25fc15a8a7a105b320dee9496c1b. Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.760 [INFO][5048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0 csi-node-driver- calico-system 80d646f2-c2b8-4ec5-90f1-97a890b8837a 942 0 2025-07-06 23:56:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-a-2f8c6d8615 csi-node-driver-ldbbw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4875da9ee3c [] [] }} ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Namespace="calico-system" Pod="csi-node-driver-ldbbw" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.762 [INFO][5048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Namespace="calico-system" Pod="csi-node-driver-ldbbw" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.813 [INFO][5078] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" HandleID="k8s-pod-network.beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 
23:56:59.815 [INFO][5078] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" HandleID="k8s-pod-network.beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-2f8c6d8615", "pod":"csi-node-driver-ldbbw", "timestamp":"2025-07-06 23:56:59.813888072 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2f8c6d8615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.815 [INFO][5078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.815 [INFO][5078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.815 [INFO][5078] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2f8c6d8615' Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.839 [INFO][5078] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.851 [INFO][5078] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.861 [INFO][5078] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.865 [INFO][5078] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.869 [INFO][5078] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.869 [INFO][5078] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.873 [INFO][5078] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.885 [INFO][5078] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.904 [INFO][5078] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.133/26] block=192.168.72.128/26 handle="k8s-pod-network.beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.904 [INFO][5078] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.133/26] handle="k8s-pod-network.beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:56:59.975316 
containerd[1675]: 2025-07-06 23:56:59.904 [INFO][5078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:59.975316 containerd[1675]: 2025-07-06 23:56:59.905 [INFO][5078] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.133/26] IPv6=[] ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" HandleID="k8s-pod-network.beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:56:59.976594 containerd[1675]: 2025-07-06 23:56:59.912 [INFO][5048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Namespace="calico-system" Pod="csi-node-driver-ldbbw" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80d646f2-c2b8-4ec5-90f1-97a890b8837a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"", Pod:"csi-node-driver-ldbbw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4875da9ee3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:59.976594 containerd[1675]: 2025-07-06 23:56:59.912 [INFO][5048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.133/32] ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Namespace="calico-system" Pod="csi-node-driver-ldbbw" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:56:59.976594 containerd[1675]: 2025-07-06 23:56:59.912 [INFO][5048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4875da9ee3c ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Namespace="calico-system" Pod="csi-node-driver-ldbbw" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:56:59.976594 containerd[1675]: 2025-07-06 23:56:59.925 [INFO][5048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Namespace="calico-system" Pod="csi-node-driver-ldbbw" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:56:59.976594 containerd[1675]: 2025-07-06 23:56:59.928 [INFO][5048] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Namespace="calico-system" Pod="csi-node-driver-ldbbw" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80d646f2-c2b8-4ec5-90f1-97a890b8837a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f", Pod:"csi-node-driver-ldbbw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4875da9ee3c", MAC:"26:33:00:a6:b2:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:59.976594 containerd[1675]: 2025-07-06 23:56:59.962 [INFO][5048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f" Namespace="calico-system" Pod="csi-node-driver-ldbbw" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:57:00.051164 systemd-networkd[1418]: cali6f12d281b3c: Link UP Jul 6 23:57:00.053675 systemd-networkd[1418]: cali6f12d281b3c: Gained carrier Jul 6 23:57:00.076953 containerd[1675]: time="2025-07-06T23:57:00.076265155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:00.080657 containerd[1675]: time="2025-07-06T23:57:00.080288753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:00.081343 containerd[1675]: time="2025-07-06T23:57:00.080916369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:00.082067 containerd[1675]: time="2025-07-06T23:57:00.082002195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.803 [INFO][5066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0 coredns-668d6bf9bc- kube-system 668cd08b-4d24-45a3-a679-683237a42032 943 0 2025-07-06 23:56:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-2f8c6d8615 coredns-668d6bf9bc-rv7bm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f12d281b3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-rv7bm" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.803 [INFO][5066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-rv7bm" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.895 [INFO][5086] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" HandleID="k8s-pod-network.6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.897 [INFO][5086] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" HandleID="k8s-pod-network.6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000382580), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-2f8c6d8615", "pod":"coredns-668d6bf9bc-rv7bm", "timestamp":"2025-07-06 23:56:59.895859866 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2f8c6d8615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.897 [INFO][5086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.905 [INFO][5086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
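[Annotation] Each ADD above logs an ipam.AutoAssignArgs value: one IPv4 and zero IPv6 addresses requested, a handle ID derived from the container ID, attributes recording namespace/node/pod/timestamp, and empty IPv4Pools (meaning any enabled pool may satisfy the request). A hedged mirror of that request shape using plain stdlib types; this is a reconstruction of the logged fields, not the libcalico-go definition:

    package main

    import (
        "fmt"
        "time"
    )

    // autoAssignArgs mirrors the fields visible in the logged
    // ipam.AutoAssignArgs dumps (illustrative reconstruction only).
    type autoAssignArgs struct {
        Num4, Num6 int
        HandleID   string
        Attrs      map[string]string
        Hostname   string
        IPv4Pools  []string // empty => any enabled pool
    }

    func newRequest(node, ns, pod, containerID string) autoAssignArgs {
        return autoAssignArgs{
            Num4:     1, // "Calico CNI IPAM request count IPv4=1 IPv6=0"
            HandleID: "k8s-pod-network." + containerID,
            Attrs: map[string]string{
                "namespace": ns, "node": node, "pod": pod,
                "timestamp": time.Now().UTC().String(),
            },
            Hostname: node,
        }
    }

    func main() {
        r := newRequest("ci-4081.3.4-a-2f8c6d8615", "kube-system",
            "coredns-668d6bf9bc-rv7bm",
            "6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1")
        fmt.Printf("%+v\n", r)
    }
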
Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.905 [INFO][5086] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2f8c6d8615' Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.951 [INFO][5086] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.959 [INFO][5086] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.974 [INFO][5086] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.978 [INFO][5086] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.989 [INFO][5086] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.990 [INFO][5086] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:56:59.998 [INFO][5086] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1 Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:57:00.019 [INFO][5086] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:57:00.038 [INFO][5086] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.134/26] block=192.168.72.128/26 handle="k8s-pod-network.6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:57:00.039 [INFO][5086] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.134/26] handle="k8s-pod-network.6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:57:00.039 [INFO][5086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
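[Annotation] Taken together, the ADDs in this window assign sequentially from the node's affine /26: .132 (calico-kube-controllers-68c8fc6bd-5mrzx), .133 (csi-node-driver-ldbbw), .134 (coredns-668d6bf9bc-rv7bm) and, just below, .135 (goldmane-768f4c5c69-8nwl8). A quick stdlib check that all of these fall inside 192.168.72.128/26 and that each affinity block holds 64 addresses:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.72.128/26")
        // /26 => 2^(32-26) = 64 addresses per affinity block.
        fmt.Println("block size:", 1<<(32-block.Bits()))
        for _, s := range []string{
            "192.168.72.132", // calico-kube-controllers-68c8fc6bd-5mrzx
            "192.168.72.133", // csi-node-driver-ldbbw
            "192.168.72.134", // coredns-668d6bf9bc-rv7bm
            "192.168.72.135", // goldmane-768f4c5c69-8nwl8
        } {
            fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
        }
    }
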
Jul 6 23:57:00.097472 containerd[1675]: 2025-07-06 23:57:00.039 [INFO][5086] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.134/26] IPv6=[] ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" HandleID="k8s-pod-network.6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:00.098823 containerd[1675]: 2025-07-06 23:57:00.044 [INFO][5066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-rv7bm" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"668cd08b-4d24-45a3-a679-683237a42032", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"", Pod:"coredns-668d6bf9bc-rv7bm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f12d281b3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:00.098823 containerd[1675]: 2025-07-06 23:57:00.045 [INFO][5066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.134/32] ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-rv7bm" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:00.098823 containerd[1675]: 2025-07-06 23:57:00.045 [INFO][5066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f12d281b3c ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-rv7bm" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:00.098823 containerd[1675]: 2025-07-06 23:57:00.052 [INFO][5066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rv7bm" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:00.098823 containerd[1675]: 2025-07-06 23:57:00.053 [INFO][5066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-rv7bm" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"668cd08b-4d24-45a3-a679-683237a42032", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1", Pod:"coredns-668d6bf9bc-rv7bm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f12d281b3c", MAC:"d6:8e:32:f2:49:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:00.098823 containerd[1675]: 2025-07-06 23:57:00.083 [INFO][5066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-rv7bm" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:00.150224 systemd[1]: Started cri-containerd-beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f.scope - libcontainer container beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f. Jul 6 23:57:00.194872 containerd[1675]: time="2025-07-06T23:57:00.194712837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:00.194872 containerd[1675]: time="2025-07-06T23:57:00.194835840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:00.197881 containerd[1675]: time="2025-07-06T23:57:00.194908242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:00.197881 containerd[1675]: time="2025-07-06T23:57:00.195153248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:00.221977 containerd[1675]: time="2025-07-06T23:57:00.221925999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldbbw,Uid:80d646f2-c2b8-4ec5-90f1-97a890b8837a,Namespace:calico-system,Attempt:1,} returns sandbox id \"beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f\"" Jul 6 23:57:00.239558 systemd[1]: Started cri-containerd-6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1.scope - libcontainer container 6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1. Jul 6 23:57:00.249118 systemd-networkd[1418]: cali76a5ae9f151: Link UP Jul 6 23:57:00.250269 systemd-networkd[1418]: cali76a5ae9f151: Gained carrier Jul 6 23:57:00.277705 containerd[1675]: time="2025-07-06T23:57:00.277651255Z" level=info msg="StartContainer for \"24bfbea4e45888031957addb5c3288a1ae3f25fc15a8a7a105b320dee9496c1b\" returns successfully" Jul 6 23:57:00.292188 systemd-networkd[1418]: calie94fd8cc1a7: Gained IPv6LL Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:56:59.979 [INFO][5098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0 goldmane-768f4c5c69- calico-system f7ba973d-dc0d-426a-8adc-f92cde7b6fed 944 0 2025-07-06 23:56:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-a-2f8c6d8615 goldmane-768f4c5c69-8nwl8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali76a5ae9f151 [] [] }} ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Namespace="calico-system" Pod="goldmane-768f4c5c69-8nwl8" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:56:59.979 [INFO][5098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Namespace="calico-system" Pod="goldmane-768f4c5c69-8nwl8" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.104 [INFO][5154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" HandleID="k8s-pod-network.ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.104 [INFO][5154] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" HandleID="k8s-pod-network.ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e400), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-2f8c6d8615", "pod":"goldmane-768f4c5c69-8nwl8", "timestamp":"2025-07-06 23:57:00.100915855 +0000 UTC"}, 
Hostname:"ci-4081.3.4-a-2f8c6d8615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.104 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.104 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.104 [INFO][5154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2f8c6d8615' Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.124 [INFO][5154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.134 [INFO][5154] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.142 [INFO][5154] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.144 [INFO][5154] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.156 [INFO][5154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.156 [INFO][5154] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.161 [INFO][5154] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026 Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.192 [INFO][5154] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.219 [INFO][5154] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.135/26] block=192.168.72.128/26 handle="k8s-pod-network.ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.220 [INFO][5154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.135/26] handle="k8s-pod-network.ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.221 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:57:00.303080 containerd[1675]: 2025-07-06 23:57:00.221 [INFO][5154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.135/26] IPv6=[] ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" HandleID="k8s-pod-network.ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:00.303964 containerd[1675]: 2025-07-06 23:57:00.230 [INFO][5098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Namespace="calico-system" Pod="goldmane-768f4c5c69-8nwl8" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f7ba973d-dc0d-426a-8adc-f92cde7b6fed", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"", Pod:"goldmane-768f4c5c69-8nwl8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76a5ae9f151", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:00.303964 containerd[1675]: 2025-07-06 23:57:00.230 [INFO][5098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.135/32] ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Namespace="calico-system" Pod="goldmane-768f4c5c69-8nwl8" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:00.303964 containerd[1675]: 2025-07-06 23:57:00.230 [INFO][5098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76a5ae9f151 ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Namespace="calico-system" Pod="goldmane-768f4c5c69-8nwl8" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:00.303964 containerd[1675]: 2025-07-06 23:57:00.257 [INFO][5098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Namespace="calico-system" Pod="goldmane-768f4c5c69-8nwl8" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:00.303964 containerd[1675]: 2025-07-06 23:57:00.261 [INFO][5098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-8nwl8" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f7ba973d-dc0d-426a-8adc-f92cde7b6fed", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026", Pod:"goldmane-768f4c5c69-8nwl8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76a5ae9f151", MAC:"b2:59:cf:b3:15:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:00.303964 containerd[1675]: 2025-07-06 23:57:00.297 [INFO][5098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026" Namespace="calico-system" Pod="goldmane-768f4c5c69-8nwl8" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:00.343909 systemd-networkd[1418]: caliabdf5a5c51a: Link UP Jul 6 23:57:00.346441 systemd-networkd[1418]: caliabdf5a5c51a: Gained carrier Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:56:59.957 [INFO][5093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0 coredns-668d6bf9bc- kube-system 8f8547f6-c7c2-4c77-af76-00fb7e939448 945 0 2025-07-06 23:56:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-2f8c6d8615 coredns-668d6bf9bc-bz9rt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliabdf5a5c51a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Namespace="kube-system" Pod="coredns-668d6bf9bc-bz9rt" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:56:59.957 [INFO][5093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Namespace="kube-system" Pod="coredns-668d6bf9bc-bz9rt" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.161 [INFO][5156] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" HandleID="k8s-pod-network.88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.163 [INFO][5156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" HandleID="k8s-pod-network.88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000322ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-2f8c6d8615", "pod":"coredns-668d6bf9bc-bz9rt", "timestamp":"2025-07-06 23:57:00.152836218 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2f8c6d8615", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.163 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.221 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.221 [INFO][5156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2f8c6d8615' Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.248 [INFO][5156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.267 [INFO][5156] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.276 [INFO][5156] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.283 [INFO][5156] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.288 [INFO][5156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.288 [INFO][5156] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.294 [INFO][5156] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70 Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.304 [INFO][5156] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.320 [INFO][5156] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.136/26] block=192.168.72.128/26 
handle="k8s-pod-network.88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.320 [INFO][5156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.136/26] handle="k8s-pod-network.88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" host="ci-4081.3.4-a-2f8c6d8615" Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.320 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:00.380220 containerd[1675]: 2025-07-06 23:57:00.320 [INFO][5156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.136/26] IPv6=[] ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" HandleID="k8s-pod-network.88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:00.382988 containerd[1675]: 2025-07-06 23:57:00.328 [INFO][5093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Namespace="kube-system" Pod="coredns-668d6bf9bc-bz9rt" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8f8547f6-c7c2-4c77-af76-00fb7e939448", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"", Pod:"coredns-668d6bf9bc-bz9rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliabdf5a5c51a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:00.382988 containerd[1675]: 2025-07-06 23:57:00.328 [INFO][5093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.136/32] ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Namespace="kube-system" Pod="coredns-668d6bf9bc-bz9rt" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:00.382988 containerd[1675]: 2025-07-06 23:57:00.328 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the 
host side veth name to caliabdf5a5c51a ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Namespace="kube-system" Pod="coredns-668d6bf9bc-bz9rt" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:00.382988 containerd[1675]: 2025-07-06 23:57:00.349 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Namespace="kube-system" Pod="coredns-668d6bf9bc-bz9rt" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:00.382988 containerd[1675]: 2025-07-06 23:57:00.350 [INFO][5093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Namespace="kube-system" Pod="coredns-668d6bf9bc-bz9rt" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8f8547f6-c7c2-4c77-af76-00fb7e939448", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70", Pod:"coredns-668d6bf9bc-bz9rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliabdf5a5c51a", MAC:"b2:46:bd:59:0a:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:00.382988 containerd[1675]: 2025-07-06 23:57:00.369 [INFO][5093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70" Namespace="kube-system" Pod="coredns-668d6bf9bc-bz9rt" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:00.402174 containerd[1675]: time="2025-07-06T23:57:00.399435618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:00.402174 containerd[1675]: time="2025-07-06T23:57:00.399538620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:00.402174 containerd[1675]: time="2025-07-06T23:57:00.399569721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:00.402174 containerd[1675]: time="2025-07-06T23:57:00.399695724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:00.402174 containerd[1675]: time="2025-07-06T23:57:00.400834252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rv7bm,Uid:668cd08b-4d24-45a3-a679-683237a42032,Namespace:kube-system,Attempt:1,} returns sandbox id \"6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1\"" Jul 6 23:57:00.409574 containerd[1675]: time="2025-07-06T23:57:00.409405061Z" level=info msg="CreateContainer within sandbox \"6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:57:00.437456 containerd[1675]: time="2025-07-06T23:57:00.436951231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:00.438152 containerd[1675]: time="2025-07-06T23:57:00.437745250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:00.441492 containerd[1675]: time="2025-07-06T23:57:00.438180161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:00.441492 containerd[1675]: time="2025-07-06T23:57:00.438269663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:00.455518 systemd[1]: Started cri-containerd-ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026.scope - libcontainer container ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026. Jul 6 23:57:00.482240 systemd[1]: Started cri-containerd-88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70.scope - libcontainer container 88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70. Jul 6 23:57:00.496851 containerd[1675]: time="2025-07-06T23:57:00.496809587Z" level=info msg="CreateContainer within sandbox \"6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0705f54dfb87dfe15b67592781676bd86f03ac7cb2c2307db05d88d032b5645\"" Jul 6 23:57:00.498227 containerd[1675]: time="2025-07-06T23:57:00.498033517Z" level=info msg="StartContainer for \"c0705f54dfb87dfe15b67592781676bd86f03ac7cb2c2307db05d88d032b5645\"" Jul 6 23:57:00.551242 systemd[1]: Started cri-containerd-c0705f54dfb87dfe15b67592781676bd86f03ac7cb2c2307db05d88d032b5645.scope - libcontainer container c0705f54dfb87dfe15b67592781676bd86f03ac7cb2c2307db05d88d032b5645. 
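[Editor's note] Interleaved entries like the ones above are easier to follow once the "RunPodSandbox ... returns sandbox id" messages are pulled out and mapped back to pod names. A stdlib-only Go sketch of that extraction is below; the regexp is tuned to the escaped quoting in this journal's formatting, nothing more.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the containerd CRI message
//   RunPodSandbox for &PodSandboxMetadata{Name:<pod>,...} returns sandbox id \"<id>\"
// as it appears, backslash-escaped, inside these journal lines.
var sandboxRe = regexp.MustCompile(
	`RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*?returns sandbox id \\"([0-9a-f]+)\\"`)

func main() {
	sandboxToPod := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := sandboxRe.FindStringSubmatch(sc.Text()); m != nil {
			sandboxToPod[m[2]] = m[1]
		}
	}
	for id, pod := range sandboxToPod {
		fmt.Printf("%.12s… -> %s\n", id, pod)
	}
}
```

Fed this section, it would recover, for example, beb4b7c0abdd… -> csi-node-driver-ldbbw and 6e8400f9fa1c… -> coredns-668d6bf9bc-rv7bm.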
Jul 6 23:57:00.587596 containerd[1675]: time="2025-07-06T23:57:00.587550695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bz9rt,Uid:8f8547f6-c7c2-4c77-af76-00fb7e939448,Namespace:kube-system,Attempt:1,} returns sandbox id \"88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70\"" Jul 6 23:57:00.587776 containerd[1675]: time="2025-07-06T23:57:00.587750800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8nwl8,Uid:f7ba973d-dc0d-426a-8adc-f92cde7b6fed,Namespace:calico-system,Attempt:1,} returns sandbox id \"ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026\"" Jul 6 23:57:00.594705 containerd[1675]: time="2025-07-06T23:57:00.594655268Z" level=info msg="CreateContainer within sandbox \"88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:57:00.610901 containerd[1675]: time="2025-07-06T23:57:00.610722058Z" level=info msg="StartContainer for \"c0705f54dfb87dfe15b67592781676bd86f03ac7cb2c2307db05d88d032b5645\" returns successfully" Jul 6 23:57:00.642974 containerd[1675]: time="2025-07-06T23:57:00.642706037Z" level=info msg="CreateContainer within sandbox \"88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b665524762029ee33729c1020c0b6c5ebf945705264a1bfe03ac588edf54989\"" Jul 6 23:57:00.645908 containerd[1675]: time="2025-07-06T23:57:00.643660360Z" level=info msg="StartContainer for \"1b665524762029ee33729c1020c0b6c5ebf945705264a1bfe03ac588edf54989\"" Jul 6 23:57:00.720250 kubelet[3121]: I0706 23:57:00.719338 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rv7bm" podStartSLOduration=39.7193143 podStartE2EDuration="39.7193143s" podCreationTimestamp="2025-07-06 23:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:00.652357871 +0000 UTC m=+44.449580831" watchObservedRunningTime="2025-07-06 23:57:00.7193143 +0000 UTC m=+44.516537260" Jul 6 23:57:00.724009 kubelet[3121]: I0706 23:57:00.723714 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58b7fd8c9c-w4wlf" podStartSLOduration=2.443159658 podStartE2EDuration="6.723696707s" podCreationTimestamp="2025-07-06 23:56:54 +0000 UTC" firstStartedPulling="2025-07-06 23:56:55.413497907 +0000 UTC m=+39.210720767" lastFinishedPulling="2025-07-06 23:56:59.694034956 +0000 UTC m=+43.491257816" observedRunningTime="2025-07-06 23:57:00.7233898 +0000 UTC m=+44.520612660" watchObservedRunningTime="2025-07-06 23:57:00.723696707 +0000 UTC m=+44.520919667" Jul 6 23:57:00.739296 systemd[1]: Started cri-containerd-1b665524762029ee33729c1020c0b6c5ebf945705264a1bfe03ac588edf54989.scope - libcontainer container 1b665524762029ee33729c1020c0b6c5ebf945705264a1bfe03ac588edf54989. 
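[Editor's note] The podStartSLOduration values in the kubelet entries above are straight subtraction: watchObservedRunningTime minus podCreationTimestamp (23:57:00.7193143 minus 23:56:21 gives the logged 39.7193143s for coredns-668d6bf9bc-rv7bm). A quick Go check of that arithmetic, parsing the timestamps exactly as they appear in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	// Values copied from the kubelet entries above. time.Parse accepts the
	// fractional seconds even though the layout string omits them.
	created, _ := time.Parse(layout, "2025-07-06 23:56:21 +0000 UTC")
	running, _ := time.Parse(layout, "2025-07-06 23:57:00.7193143 +0000 UTC")
	fmt.Println(running.Sub(created)) // 39.7193143s, matching podStartSLOduration
}
```

The whisker pod in the next entry shows the other case: its SLO duration (6.72s) exceeds the pull window (lastFinishedPulling minus firstStartedPulling, about 4.28s) because image pulling is only one phase of startup.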
Jul 6 23:57:00.797308 containerd[1675]: time="2025-07-06T23:57:00.797265297Z" level=info msg="StartContainer for \"1b665524762029ee33729c1020c0b6c5ebf945705264a1bfe03ac588edf54989\" returns successfully" Jul 6 23:57:00.867190 systemd-networkd[1418]: cali11c8c76ac1e: Gained IPv6LL Jul 6 23:57:00.927799 kubelet[3121]: I0706 23:57:00.927181 3121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:01.123252 systemd-networkd[1418]: cali4875da9ee3c: Gained IPv6LL Jul 6 23:57:01.315307 systemd-networkd[1418]: cali6f12d281b3c: Gained IPv6LL Jul 6 23:57:01.668517 kubelet[3121]: I0706 23:57:01.668446 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bz9rt" podStartSLOduration=40.668425792 podStartE2EDuration="40.668425792s" podCreationTimestamp="2025-07-06 23:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:01.66671535 +0000 UTC m=+45.463938310" watchObservedRunningTime="2025-07-06 23:57:01.668425792 +0000 UTC m=+45.465648652" Jul 6 23:57:01.763256 systemd-networkd[1418]: cali76a5ae9f151: Gained IPv6LL Jul 6 23:57:02.020268 systemd-networkd[1418]: caliabdf5a5c51a: Gained IPv6LL Jul 6 23:57:03.055107 containerd[1675]: time="2025-07-06T23:57:03.055044527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.058345 containerd[1675]: time="2025-07-06T23:57:03.058263306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 6 23:57:03.062016 containerd[1675]: time="2025-07-06T23:57:03.061969796Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.067876 containerd[1675]: time="2025-07-06T23:57:03.067794138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.068809 containerd[1675]: time="2025-07-06T23:57:03.068655758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.37331117s" Jul 6 23:57:03.068809 containerd[1675]: time="2025-07-06T23:57:03.068698060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:57:03.070282 containerd[1675]: time="2025-07-06T23:57:03.070259597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:57:03.072347 containerd[1675]: time="2025-07-06T23:57:03.072302547Z" level=info msg="CreateContainer within sandbox \"127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:57:03.107006 containerd[1675]: time="2025-07-06T23:57:03.106957390Z" level=info msg="CreateContainer within sandbox \"127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644\" 
for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"878a8d581e262416315523144d420abfa3ea7897b3c4b47f7c72f09d05394846\"" Jul 6 23:57:03.108411 containerd[1675]: time="2025-07-06T23:57:03.107583106Z" level=info msg="StartContainer for \"878a8d581e262416315523144d420abfa3ea7897b3c4b47f7c72f09d05394846\"" Jul 6 23:57:03.147206 systemd[1]: Started cri-containerd-878a8d581e262416315523144d420abfa3ea7897b3c4b47f7c72f09d05394846.scope - libcontainer container 878a8d581e262416315523144d420abfa3ea7897b3c4b47f7c72f09d05394846. Jul 6 23:57:03.194091 containerd[1675]: time="2025-07-06T23:57:03.194011708Z" level=info msg="StartContainer for \"878a8d581e262416315523144d420abfa3ea7897b3c4b47f7c72f09d05394846\" returns successfully" Jul 6 23:57:03.386848 containerd[1675]: time="2025-07-06T23:57:03.385896977Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.390193 containerd[1675]: time="2025-07-06T23:57:03.389670769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:57:03.392255 containerd[1675]: time="2025-07-06T23:57:03.392215230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 321.881031ms" Jul 6 23:57:03.392405 containerd[1675]: time="2025-07-06T23:57:03.392387335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:57:03.393469 containerd[1675]: time="2025-07-06T23:57:03.393446260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:57:03.395588 containerd[1675]: time="2025-07-06T23:57:03.395541711Z" level=info msg="CreateContainer within sandbox \"a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:57:03.435359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613981538.mount: Deactivated successfully. Jul 6 23:57:03.438737 containerd[1675]: time="2025-07-06T23:57:03.438558958Z" level=info msg="CreateContainer within sandbox \"a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c8e6680691a96c8dc142f3a1f00c3de0e224a8942e0c633e88c1dacf9846a531\"" Jul 6 23:57:03.440237 containerd[1675]: time="2025-07-06T23:57:03.440084795Z" level=info msg="StartContainer for \"c8e6680691a96c8dc142f3a1f00c3de0e224a8942e0c633e88c1dacf9846a531\"" Jul 6 23:57:03.474218 systemd[1]: Started cri-containerd-c8e6680691a96c8dc142f3a1f00c3de0e224a8942e0c633e88c1dacf9846a531.scope - libcontainer container c8e6680691a96c8dc142f3a1f00c3de0e224a8942e0c633e88c1dacf9846a531. 
Jul 6 23:57:03.529503 containerd[1675]: time="2025-07-06T23:57:03.529459070Z" level=info msg="StartContainer for \"c8e6680691a96c8dc142f3a1f00c3de0e224a8942e0c633e88c1dacf9846a531\" returns successfully" Jul 6 23:57:03.690335 kubelet[3121]: I0706 23:57:03.690173 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b4985b7cd-r7qrd" podStartSLOduration=28.375752283 podStartE2EDuration="33.690150479s" podCreationTimestamp="2025-07-06 23:56:30 +0000 UTC" firstStartedPulling="2025-07-06 23:56:57.755538294 +0000 UTC m=+41.552761154" lastFinishedPulling="2025-07-06 23:57:03.06993649 +0000 UTC m=+46.867159350" observedRunningTime="2025-07-06 23:57:03.688812947 +0000 UTC m=+47.486035807" watchObservedRunningTime="2025-07-06 23:57:03.690150479 +0000 UTC m=+47.487373439" Jul 6 23:57:03.716873 kubelet[3121]: I0706 23:57:03.716801 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b4985b7cd-b2wsx" podStartSLOduration=29.515082902 podStartE2EDuration="33.716777527s" podCreationTimestamp="2025-07-06 23:56:30 +0000 UTC" firstStartedPulling="2025-07-06 23:56:59.191563331 +0000 UTC m=+42.988786191" lastFinishedPulling="2025-07-06 23:57:03.393257956 +0000 UTC m=+47.190480816" observedRunningTime="2025-07-06 23:57:03.716465619 +0000 UTC m=+47.513688579" watchObservedRunningTime="2025-07-06 23:57:03.716777527 +0000 UTC m=+47.514000487" Jul 6 23:57:04.663482 kubelet[3121]: I0706 23:57:04.663438 3121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:08.170512 containerd[1675]: time="2025-07-06T23:57:08.170339576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:08.174361 containerd[1675]: time="2025-07-06T23:57:08.174110071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:57:08.178396 containerd[1675]: time="2025-07-06T23:57:08.177832064Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:08.184745 containerd[1675]: time="2025-07-06T23:57:08.184639635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:08.186086 containerd[1675]: time="2025-07-06T23:57:08.185840465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.792245901s" Jul 6 23:57:08.186086 containerd[1675]: time="2025-07-06T23:57:08.185881766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 23:57:08.187452 containerd[1675]: time="2025-07-06T23:57:08.187423105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:57:08.220895 containerd[1675]: time="2025-07-06T23:57:08.220478836Z" level=info 
msg="CreateContainer within sandbox \"f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:57:08.267398 containerd[1675]: time="2025-07-06T23:57:08.267346614Z" level=info msg="CreateContainer within sandbox \"f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c3f022c0652b05a9565988f51eda93a5b02cb4c08eb93f206a8869477503648e\"" Jul 6 23:57:08.269554 containerd[1675]: time="2025-07-06T23:57:08.268365340Z" level=info msg="StartContainer for \"c3f022c0652b05a9565988f51eda93a5b02cb4c08eb93f206a8869477503648e\"" Jul 6 23:57:08.315420 systemd[1]: Started cri-containerd-c3f022c0652b05a9565988f51eda93a5b02cb4c08eb93f206a8869477503648e.scope - libcontainer container c3f022c0652b05a9565988f51eda93a5b02cb4c08eb93f206a8869477503648e. Jul 6 23:57:08.392346 containerd[1675]: time="2025-07-06T23:57:08.392300055Z" level=info msg="StartContainer for \"c3f022c0652b05a9565988f51eda93a5b02cb4c08eb93f206a8869477503648e\" returns successfully" Jul 6 23:57:08.711040 kubelet[3121]: I0706 23:57:08.710662 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68c8fc6bd-5mrzx" podStartSLOduration=25.769673318 podStartE2EDuration="34.710502453s" podCreationTimestamp="2025-07-06 23:56:34 +0000 UTC" firstStartedPulling="2025-07-06 23:56:59.246242161 +0000 UTC m=+43.043465021" lastFinishedPulling="2025-07-06 23:57:08.187071296 +0000 UTC m=+51.984294156" observedRunningTime="2025-07-06 23:57:08.708906113 +0000 UTC m=+52.506128973" watchObservedRunningTime="2025-07-06 23:57:08.710502453 +0000 UTC m=+52.507725313" Jul 6 23:57:09.887357 containerd[1675]: time="2025-07-06T23:57:09.886398709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:09.889380 containerd[1675]: time="2025-07-06T23:57:09.889296982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:57:09.897826 containerd[1675]: time="2025-07-06T23:57:09.897771195Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:09.905065 containerd[1675]: time="2025-07-06T23:57:09.904999777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:09.906247 containerd[1675]: time="2025-07-06T23:57:09.906062004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.718599797s" Jul 6 23:57:09.906247 containerd[1675]: time="2025-07-06T23:57:09.906098305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:57:09.908544 containerd[1675]: time="2025-07-06T23:57:09.908510665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:57:09.909654 
containerd[1675]: time="2025-07-06T23:57:09.909620893Z" level=info msg="CreateContainer within sandbox \"beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:57:09.958163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount212642055.mount: Deactivated successfully. Jul 6 23:57:09.977286 containerd[1675]: time="2025-07-06T23:57:09.977232993Z" level=info msg="CreateContainer within sandbox \"beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"17602da390fe55e6d29b1043fe019cc8af7cee165247826731e058c09741f813\"" Jul 6 23:57:09.980055 containerd[1675]: time="2025-07-06T23:57:09.978687729Z" level=info msg="StartContainer for \"17602da390fe55e6d29b1043fe019cc8af7cee165247826731e058c09741f813\"" Jul 6 23:57:10.027227 systemd[1]: Started cri-containerd-17602da390fe55e6d29b1043fe019cc8af7cee165247826731e058c09741f813.scope - libcontainer container 17602da390fe55e6d29b1043fe019cc8af7cee165247826731e058c09741f813. Jul 6 23:57:10.072417 containerd[1675]: time="2025-07-06T23:57:10.072362084Z" level=info msg="StartContainer for \"17602da390fe55e6d29b1043fe019cc8af7cee165247826731e058c09741f813\" returns successfully" Jul 6 23:57:13.417836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164395000.mount: Deactivated successfully. Jul 6 23:57:16.369089 containerd[1675]: time="2025-07-06T23:57:16.368685569Z" level=info msg="StopPodSandbox for \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\"" Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.534 [WARNING][5748] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"668cd08b-4d24-45a3-a679-683237a42032", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1", Pod:"coredns-668d6bf9bc-rv7bm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f12d281b3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.535 [INFO][5748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.535 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" iface="eth0" netns="" Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.535 [INFO][5748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.535 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.581 [INFO][5757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" HandleID="k8s-pod-network.70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.582 [INFO][5757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.582 [INFO][5757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.591 [WARNING][5757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" HandleID="k8s-pod-network.70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.591 [INFO][5757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" HandleID="k8s-pod-network.70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.592 [INFO][5757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:16.601344 containerd[1675]: 2025-07-06 23:57:16.596 [INFO][5748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:57:16.601344 containerd[1675]: time="2025-07-06T23:57:16.600228355Z" level=info msg="TearDown network for sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\" successfully" Jul 6 23:57:16.601344 containerd[1675]: time="2025-07-06T23:57:16.600266256Z" level=info msg="StopPodSandbox for \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\" returns successfully" Jul 6 23:57:16.601344 containerd[1675]: time="2025-07-06T23:57:16.600916072Z" level=info msg="RemovePodSandbox for \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\"" Jul 6 23:57:16.601344 containerd[1675]: time="2025-07-06T23:57:16.600951773Z" level=info msg="Forcibly stopping sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\"" Jul 6 23:57:16.624749 containerd[1675]: time="2025-07-06T23:57:16.623870946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:57:16.624890 containerd[1675]: time="2025-07-06T23:57:16.624765768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:16.629965 containerd[1675]: time="2025-07-06T23:57:16.629900996Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:16.636481 containerd[1675]: time="2025-07-06T23:57:16.635924847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:16.648116 containerd[1675]: time="2025-07-06T23:57:16.645292181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.736740115s" Jul 6 23:57:16.648116 containerd[1675]: time="2025-07-06T23:57:16.645344582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference 
\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:57:16.654072 containerd[1675]: time="2025-07-06T23:57:16.652878271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:57:16.655366 containerd[1675]: time="2025-07-06T23:57:16.655326032Z" level=info msg="CreateContainer within sandbox \"ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:57:16.707769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2282054955.mount: Deactivated successfully. Jul 6 23:57:16.709594 containerd[1675]: time="2025-07-06T23:57:16.709305481Z" level=info msg="CreateContainer within sandbox \"ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0\"" Jul 6 23:57:16.713454 containerd[1675]: time="2025-07-06T23:57:16.712520261Z" level=info msg="StartContainer for \"a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0\"" Jul 6 23:57:16.790238 systemd[1]: Started cri-containerd-a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0.scope - libcontainer container a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0. Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.695 [WARNING][5776] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"668cd08b-4d24-45a3-a679-683237a42032", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"6e8400f9fa1c9ecdabe189cc8ad14a16a79cf651071fc56eab97aed1f71bf8c1", Pod:"coredns-668d6bf9bc-rv7bm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f12d281b3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.695 
[INFO][5776] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.695 [INFO][5776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" iface="eth0" netns="" Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.695 [INFO][5776] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.695 [INFO][5776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.787 [INFO][5784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" HandleID="k8s-pod-network.70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.788 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.788 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.803 [WARNING][5784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" HandleID="k8s-pod-network.70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.803 [INFO][5784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" HandleID="k8s-pod-network.70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--rv7bm-eth0" Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.808 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:16.816353 containerd[1675]: 2025-07-06 23:57:16.812 [INFO][5776] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719" Jul 6 23:57:16.816991 containerd[1675]: time="2025-07-06T23:57:16.816399357Z" level=info msg="TearDown network for sandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\" successfully" Jul 6 23:57:16.830262 containerd[1675]: time="2025-07-06T23:57:16.830202102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
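[Editor's note] Both warnings in the entries above ("Asked to release address but it doesn't exist. Ignoring" and the nil-podSandboxStatus event) follow the same pattern: during forced cleanup, "already gone" is treated as success so a repeated DEL stays idempotent. A generic Go sketch of that error handling; the ErrNotFound here is a local illustration, not a named error exported by Calico or containerd.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNotFound stands in for whatever "it isn't there" error the underlying
// store returns (IPAM handle missing, sandbox missing, and so on).
var ErrNotFound = errors.New("not found")

func releaseAddress(handleID string) error {
	return ErrNotFound // simulate: the IPs were already released earlier
}

// cleanup treats not-found as success: a second DEL for the same sandbox
// must not fail just because the first one already did the work.
func cleanup(handleID string) error {
	if err := releaseAddress(handleID); err != nil {
		if errors.Is(err, ErrNotFound) {
			fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring")
			return nil
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(cleanup("k8s-pod-network.70a43370…")) // <nil>
}
```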
Jul 6 23:57:16.830408 containerd[1675]: time="2025-07-06T23:57:16.830309404Z" level=info msg="RemovePodSandbox \"70a4337085e88c9f48bccd5bc9e469714569e2e9f78c14a20652df95ec61b719\" returns successfully" Jul 6 23:57:16.833163 containerd[1675]: time="2025-07-06T23:57:16.830875018Z" level=info msg="StopPodSandbox for \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\"" Jul 6 23:57:16.963739 containerd[1675]: time="2025-07-06T23:57:16.962711413Z" level=info msg="StartContainer for \"a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0\" returns successfully" Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:16.920 [WARNING][5823] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8f8547f6-c7c2-4c77-af76-00fb7e939448", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70", Pod:"coredns-668d6bf9bc-bz9rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliabdf5a5c51a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:16.921 [INFO][5823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:16.921 [INFO][5823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" iface="eth0" netns="" Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:16.921 [INFO][5823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:16.921 [INFO][5823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:16.985 [INFO][5838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" HandleID="k8s-pod-network.74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:16.988 [INFO][5838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:16.988 [INFO][5838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:17.004 [WARNING][5838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" HandleID="k8s-pod-network.74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:17.004 [INFO][5838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" HandleID="k8s-pod-network.74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:17.006 [INFO][5838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:17.012586 containerd[1675]: 2025-07-06 23:57:17.009 [INFO][5823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:57:17.013304 containerd[1675]: time="2025-07-06T23:57:17.012640860Z" level=info msg="TearDown network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\" successfully" Jul 6 23:57:17.013304 containerd[1675]: time="2025-07-06T23:57:17.012671661Z" level=info msg="StopPodSandbox for \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\" returns successfully" Jul 6 23:57:17.014599 containerd[1675]: time="2025-07-06T23:57:17.014248701Z" level=info msg="RemovePodSandbox for \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\"" Jul 6 23:57:17.014599 containerd[1675]: time="2025-07-06T23:57:17.014287402Z" level=info msg="Forcibly stopping sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\"" Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.114 [WARNING][5853] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8f8547f6-c7c2-4c77-af76-00fb7e939448", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"88dd82295fa83f134f892881537d7077461294d9fa3035c3eaf551e47a453f70", Pod:"coredns-668d6bf9bc-bz9rt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliabdf5a5c51a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.114 [INFO][5853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.118 [INFO][5853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" iface="eth0" netns="" Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.118 [INFO][5853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.118 [INFO][5853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.171 [INFO][5863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" HandleID="k8s-pod-network.74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.171 [INFO][5863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.171 [INFO][5863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.189 [WARNING][5863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" HandleID="k8s-pod-network.74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.189 [INFO][5863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" HandleID="k8s-pod-network.74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-coredns--668d6bf9bc--bz9rt-eth0" Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.191 [INFO][5863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:17.197138 containerd[1675]: 2025-07-06 23:57:17.193 [INFO][5853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378" Jul 6 23:57:17.197793 containerd[1675]: time="2025-07-06T23:57:17.197194372Z" level=info msg="TearDown network for sandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\" successfully" Jul 6 23:57:17.205810 containerd[1675]: time="2025-07-06T23:57:17.205750086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:17.205995 containerd[1675]: time="2025-07-06T23:57:17.205839188Z" level=info msg="RemovePodSandbox \"74414927d8c2bd34a1651fb5131c323ab443dc9e70cf891943be061b5c066378\" returns successfully" Jul 6 23:57:17.206438 containerd[1675]: time="2025-07-06T23:57:17.206407302Z" level=info msg="StopPodSandbox for \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\"" Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.258 [WARNING][5878] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.259 [INFO][5878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.259 [INFO][5878] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" iface="eth0" netns="" Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.259 [INFO][5878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.259 [INFO][5878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.293 [INFO][5885] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" HandleID="k8s-pod-network.f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.294 [INFO][5885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.294 [INFO][5885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.305 [WARNING][5885] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" HandleID="k8s-pod-network.f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.306 [INFO][5885] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" HandleID="k8s-pod-network.f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.310 [INFO][5885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:17.313282 containerd[1675]: 2025-07-06 23:57:17.311 [INFO][5878] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:57:17.315158 containerd[1675]: time="2025-07-06T23:57:17.314127794Z" level=info msg="TearDown network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\" successfully" Jul 6 23:57:17.315158 containerd[1675]: time="2025-07-06T23:57:17.314178495Z" level=info msg="StopPodSandbox for \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\" returns successfully" Jul 6 23:57:17.315935 containerd[1675]: time="2025-07-06T23:57:17.315895738Z" level=info msg="RemovePodSandbox for \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\"" Jul 6 23:57:17.316040 containerd[1675]: time="2025-07-06T23:57:17.315942439Z" level=info msg="Forcibly stopping sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\"" Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.365 [WARNING][5901] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" WorkloadEndpoint="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.365 [INFO][5901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.365 [INFO][5901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" iface="eth0" netns="" Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.365 [INFO][5901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.365 [INFO][5901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.404 [INFO][5909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" HandleID="k8s-pod-network.f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.405 [INFO][5909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.405 [INFO][5909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.415 [WARNING][5909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" HandleID="k8s-pod-network.f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.415 [INFO][5909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" HandleID="k8s-pod-network.f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-whisker--5497dd78ff--5pz86-eth0" Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.417 [INFO][5909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:17.420834 containerd[1675]: 2025-07-06 23:57:17.419 [INFO][5901] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47" Jul 6 23:57:17.421771 containerd[1675]: time="2025-07-06T23:57:17.420922563Z" level=info msg="TearDown network for sandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\" successfully" Jul 6 23:57:17.434043 containerd[1675]: time="2025-07-06T23:57:17.431906337Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:17.434043 containerd[1675]: time="2025-07-06T23:57:17.432042341Z" level=info msg="RemovePodSandbox \"f3a3e664af68a188d5f9b393962d2b685b83a77e3bf5e9f853da106818b68b47\" returns successfully" Jul 6 23:57:17.434043 containerd[1675]: time="2025-07-06T23:57:17.432524453Z" level=info msg="StopPodSandbox for \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\"" Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.493 [WARNING][5925] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0", GenerateName:"calico-kube-controllers-68c8fc6bd-", Namespace:"calico-system", SelfLink:"", UID:"3050acde-8e24-48b0-af1c-c0021f4ca060", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c8fc6bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2", Pod:"calico-kube-controllers-68c8fc6bd-5mrzx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie94fd8cc1a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.494 [INFO][5925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.494 [INFO][5925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" iface="eth0" netns="" Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.494 [INFO][5925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.494 [INFO][5925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.528 [INFO][5932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" HandleID="k8s-pod-network.4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.528 [INFO][5932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.529 [INFO][5932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.544 [WARNING][5932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" HandleID="k8s-pod-network.4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.545 [INFO][5932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" HandleID="k8s-pod-network.4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.548 [INFO][5932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:17.551992 containerd[1675]: 2025-07-06 23:57:17.549 [INFO][5925] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:57:17.551992 containerd[1675]: time="2025-07-06T23:57:17.551738232Z" level=info msg="TearDown network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\" successfully" Jul 6 23:57:17.551992 containerd[1675]: time="2025-07-06T23:57:17.551951637Z" level=info msg="StopPodSandbox for \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\" returns successfully" Jul 6 23:57:17.553249 containerd[1675]: time="2025-07-06T23:57:17.553207768Z" level=info msg="RemovePodSandbox for \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\"" Jul 6 23:57:17.553363 containerd[1675]: time="2025-07-06T23:57:17.553255969Z" level=info msg="Forcibly stopping sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\"" Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.621 [WARNING][5947] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0", GenerateName:"calico-kube-controllers-68c8fc6bd-", Namespace:"calico-system", SelfLink:"", UID:"3050acde-8e24-48b0-af1c-c0021f4ca060", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c8fc6bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"f7cbad31528850ff02eaa17018cbc39719b776c54ce78933d1b44675f2977ec2", Pod:"calico-kube-controllers-68c8fc6bd-5mrzx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie94fd8cc1a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.622 [INFO][5947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.622 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" iface="eth0" netns="" Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.622 [INFO][5947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.622 [INFO][5947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.660 [INFO][5954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" HandleID="k8s-pod-network.4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.660 [INFO][5954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.660 [INFO][5954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.669 [WARNING][5954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" HandleID="k8s-pod-network.4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.669 [INFO][5954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" HandleID="k8s-pod-network.4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--kube--controllers--68c8fc6bd--5mrzx-eth0" Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.671 [INFO][5954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:17.677106 containerd[1675]: 2025-07-06 23:57:17.673 [INFO][5947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542" Jul 6 23:57:17.677106 containerd[1675]: time="2025-07-06T23:57:17.675826932Z" level=info msg="TearDown network for sandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\" successfully" Jul 6 23:57:17.686074 containerd[1675]: time="2025-07-06T23:57:17.685834482Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:17.686074 containerd[1675]: time="2025-07-06T23:57:17.685956085Z" level=info msg="RemovePodSandbox \"4008710c01c45c4d936b6825a220ba2b4ba5d47cdbb1759f998db1fa59d32542\" returns successfully" Jul 6 23:57:17.687274 containerd[1675]: time="2025-07-06T23:57:17.686856608Z" level=info msg="StopPodSandbox for \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\"" Jul 6 23:57:17.777436 kubelet[3121]: I0706 23:57:17.775001 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-8nwl8" podStartSLOduration=28.715326872 podStartE2EDuration="44.77497861s" podCreationTimestamp="2025-07-06 23:56:33 +0000 UTC" firstStartedPulling="2025-07-06 23:57:00.592584717 +0000 UTC m=+44.389807577" lastFinishedPulling="2025-07-06 23:57:16.652236455 +0000 UTC m=+60.449459315" observedRunningTime="2025-07-06 23:57:17.773433871 +0000 UTC m=+61.570656731" watchObservedRunningTime="2025-07-06 23:57:17.77497861 +0000 UTC m=+61.572201470" Jul 6 23:57:17.820590 systemd[1]: run-containerd-runc-k8s.io-a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0-runc.TyBaJn.mount: Deactivated successfully. Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.800 [WARNING][5968] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80d646f2-c2b8-4ec5-90f1-97a890b8837a", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f", Pod:"csi-node-driver-ldbbw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4875da9ee3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.800 [INFO][5968] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.800 [INFO][5968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" iface="eth0" netns="" Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.800 [INFO][5968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.800 [INFO][5968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.900 [INFO][5986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" HandleID="k8s-pod-network.18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.900 [INFO][5986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.900 [INFO][5986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.909 [WARNING][5986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" HandleID="k8s-pod-network.18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.909 [INFO][5986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" HandleID="k8s-pod-network.18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.912 [INFO][5986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:17.921227 containerd[1675]: 2025-07-06 23:57:17.915 [INFO][5968] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:57:17.923491 containerd[1675]: time="2025-07-06T23:57:17.921225864Z" level=info msg="TearDown network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\" successfully" Jul 6 23:57:17.923491 containerd[1675]: time="2025-07-06T23:57:17.921258765Z" level=info msg="StopPodSandbox for \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\" returns successfully" Jul 6 23:57:17.923569 containerd[1675]: time="2025-07-06T23:57:17.923499121Z" level=info msg="RemovePodSandbox for \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\"" Jul 6 23:57:17.923569 containerd[1675]: time="2025-07-06T23:57:17.923535022Z" level=info msg="Forcibly stopping sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\"" Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.024 [WARNING][6009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80d646f2-c2b8-4ec5-90f1-97a890b8837a", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f", Pod:"csi-node-driver-ldbbw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4875da9ee3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.025 [INFO][6009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.025 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" iface="eth0" netns="" Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.025 [INFO][6009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.025 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.061 [INFO][6019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" HandleID="k8s-pod-network.18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.061 [INFO][6019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.061 [INFO][6019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.083 [WARNING][6019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" HandleID="k8s-pod-network.18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.084 [INFO][6019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" HandleID="k8s-pod-network.18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-csi--node--driver--ldbbw-eth0" Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.087 [INFO][6019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:18.094147 containerd[1675]: 2025-07-06 23:57:18.092 [INFO][6009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705" Jul 6 23:57:18.095083 containerd[1675]: time="2025-07-06T23:57:18.094933805Z" level=info msg="TearDown network for sandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\" successfully" Jul 6 23:57:18.107833 containerd[1675]: time="2025-07-06T23:57:18.107784026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:18.108323 containerd[1675]: time="2025-07-06T23:57:18.108291439Z" level=info msg="RemovePodSandbox \"18ccdbb38577b1df0edbfbefd1d746757ce27adb68ba10f5a031f852ec6a1705\" returns successfully" Jul 6 23:57:18.109360 containerd[1675]: time="2025-07-06T23:57:18.109332565Z" level=info msg="StopPodSandbox for \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\"" Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.199 [WARNING][6039] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f7ba973d-dc0d-426a-8adc-f92cde7b6fed", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026", Pod:"goldmane-768f4c5c69-8nwl8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76a5ae9f151", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.199 [INFO][6039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.199 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" iface="eth0" netns="" Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.199 [INFO][6039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.199 [INFO][6039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.295 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" HandleID="k8s-pod-network.7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.295 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.295 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.308 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" HandleID="k8s-pod-network.7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.308 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" HandleID="k8s-pod-network.7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.310 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:18.319502 containerd[1675]: 2025-07-06 23:57:18.314 [INFO][6039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:57:18.321430 containerd[1675]: time="2025-07-06T23:57:18.320130732Z" level=info msg="TearDown network for sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\" successfully" Jul 6 23:57:18.321430 containerd[1675]: time="2025-07-06T23:57:18.320166933Z" level=info msg="StopPodSandbox for \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\" returns successfully" Jul 6 23:57:18.321430 containerd[1675]: time="2025-07-06T23:57:18.320813249Z" level=info msg="RemovePodSandbox for \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\"" Jul 6 23:57:18.321430 containerd[1675]: time="2025-07-06T23:57:18.320850650Z" level=info msg="Forcibly stopping sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\"" Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.457 [WARNING][6060] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f7ba973d-dc0d-426a-8adc-f92cde7b6fed", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"ff20babc654141e8929d259646617bca372e6cf7bd1207e2568a47e740621026", Pod:"goldmane-768f4c5c69-8nwl8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76a5ae9f151", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.458 [INFO][6060] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.458 [INFO][6060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" iface="eth0" netns="" Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.458 [INFO][6060] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.458 [INFO][6060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.563 [INFO][6067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" HandleID="k8s-pod-network.7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.563 [INFO][6067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.563 [INFO][6067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.596 [WARNING][6067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" HandleID="k8s-pod-network.7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.596 [INFO][6067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" HandleID="k8s-pod-network.7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-goldmane--768f4c5c69--8nwl8-eth0" Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.599 [INFO][6067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:18.608094 containerd[1675]: 2025-07-06 23:57:18.601 [INFO][6060] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782" Jul 6 23:57:18.608094 containerd[1675]: time="2025-07-06T23:57:18.607703118Z" level=info msg="TearDown network for sandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\" successfully" Jul 6 23:57:18.620894 containerd[1675]: time="2025-07-06T23:57:18.620855647Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:18.621076 containerd[1675]: time="2025-07-06T23:57:18.621057152Z" level=info msg="RemovePodSandbox \"7dd460b959c9019d928db289eff9cde2774bbe950cee49b4addc7c0a09179782\" returns successfully" Jul 6 23:57:18.621669 containerd[1675]: time="2025-07-06T23:57:18.621646167Z" level=info msg="StopPodSandbox for \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\"" Jul 6 23:57:18.732551 containerd[1675]: time="2025-07-06T23:57:18.732503837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:18.737111 containerd[1675]: time="2025-07-06T23:57:18.737057651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:57:18.739958 containerd[1675]: time="2025-07-06T23:57:18.739570513Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:18.748966 containerd[1675]: time="2025-07-06T23:57:18.748934547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:18.752407 containerd[1675]: time="2025-07-06T23:57:18.752371833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.099417461s" Jul 6 23:57:18.754831 containerd[1675]: time="2025-07-06T23:57:18.754670391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" 
returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:57:18.762816 containerd[1675]: time="2025-07-06T23:57:18.762586589Z" level=info msg="CreateContainer within sandbox \"beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.712 [WARNING][6081] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0", GenerateName:"calico-apiserver-5b4985b7cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d596fb90-2a5b-4b75-b0f5-1553ebaf2652", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4985b7cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644", Pod:"calico-apiserver-5b4985b7cd-r7qrd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadf61eb0654", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.713 [INFO][6081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.713 [INFO][6081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" iface="eth0" netns="" Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.713 [INFO][6081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.713 [INFO][6081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.769 [INFO][6090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" HandleID="k8s-pod-network.0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.770 [INFO][6090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.770 [INFO][6090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.779 [WARNING][6090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" HandleID="k8s-pod-network.0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.779 [INFO][6090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" HandleID="k8s-pod-network.0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.782 [INFO][6090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:18.786263 containerd[1675]: 2025-07-06 23:57:18.784 [INFO][6081] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:57:18.786263 containerd[1675]: time="2025-07-06T23:57:18.786111776Z" level=info msg="TearDown network for sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\" successfully" Jul 6 23:57:18.786263 containerd[1675]: time="2025-07-06T23:57:18.786143177Z" level=info msg="StopPodSandbox for \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\" returns successfully" Jul 6 23:57:18.788119 containerd[1675]: time="2025-07-06T23:57:18.787561413Z" level=info msg="RemovePodSandbox for \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\"" Jul 6 23:57:18.788119 containerd[1675]: time="2025-07-06T23:57:18.787595213Z" level=info msg="Forcibly stopping sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\"" Jul 6 23:57:18.809856 containerd[1675]: time="2025-07-06T23:57:18.809791368Z" level=info msg="CreateContainer within sandbox \"beb4b7c0abdd376a682ad7329f4b1d756536feb55ebcf40d78fcead11ba29b2f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"117dc053721d28bac1c84d084ca5d43ae8b5a28ad44bbdbcdc9ee2ef621fe8a3\"" Jul 6 23:57:18.811200 containerd[1675]: time="2025-07-06T23:57:18.811127801Z" level=info msg="StartContainer for \"117dc053721d28bac1c84d084ca5d43ae8b5a28ad44bbdbcdc9ee2ef621fe8a3\"" Jul 6 23:57:18.905198 systemd[1]: Started cri-containerd-117dc053721d28bac1c84d084ca5d43ae8b5a28ad44bbdbcdc9ee2ef621fe8a3.scope - libcontainer container 117dc053721d28bac1c84d084ca5d43ae8b5a28ad44bbdbcdc9ee2ef621fe8a3. Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.955 [WARNING][6105] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0", GenerateName:"calico-apiserver-5b4985b7cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d596fb90-2a5b-4b75-b0f5-1553ebaf2652", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4985b7cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"127a2091dae38edd5147903fc400e76cfcb1ab7651b2d0bc3499b2353794b644", Pod:"calico-apiserver-5b4985b7cd-r7qrd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadf61eb0654", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.956 [INFO][6105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.956 [INFO][6105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" iface="eth0" netns="" Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.956 [INFO][6105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.956 [INFO][6105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.988 [INFO][6157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" HandleID="k8s-pod-network.0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.988 [INFO][6157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.989 [INFO][6157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.998 [WARNING][6157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" HandleID="k8s-pod-network.0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:18.998 [INFO][6157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" HandleID="k8s-pod-network.0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--r7qrd-eth0" Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:19.003 [INFO][6157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:19.014052 containerd[1675]: 2025-07-06 23:57:19.004 [INFO][6105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94" Jul 6 23:57:19.014052 containerd[1675]: time="2025-07-06T23:57:19.013160450Z" level=info msg="TearDown network for sandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\" successfully" Jul 6 23:57:19.023889 containerd[1675]: time="2025-07-06T23:57:19.023845217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:19.024172 containerd[1675]: time="2025-07-06T23:57:19.024146124Z" level=info msg="RemovePodSandbox \"0595192a2e630d1651a71a57ec9345e9d67758ab61b7f9f612750d6456548b94\" returns successfully" Jul 6 23:57:19.025384 containerd[1675]: time="2025-07-06T23:57:19.025046347Z" level=info msg="StopPodSandbox for \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\"" Jul 6 23:57:19.032383 containerd[1675]: time="2025-07-06T23:57:19.032351229Z" level=info msg="StartContainer for \"117dc053721d28bac1c84d084ca5d43ae8b5a28ad44bbdbcdc9ee2ef621fe8a3\" returns successfully" Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.077 [WARNING][6180] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0", GenerateName:"calico-apiserver-5b4985b7cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"315fadc6-402d-4c42-a716-cdde0ac33312", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4985b7cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6", Pod:"calico-apiserver-5b4985b7cd-b2wsx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11c8c76ac1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.077 [INFO][6180] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.077 [INFO][6180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" iface="eth0" netns="" Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.077 [INFO][6180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.077 [INFO][6180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.111 [INFO][6190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" HandleID="k8s-pod-network.c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.111 [INFO][6190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.112 [INFO][6190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.118 [WARNING][6190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" HandleID="k8s-pod-network.c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.118 [INFO][6190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" HandleID="k8s-pod-network.c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.119 [INFO][6190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:19.123064 containerd[1675]: 2025-07-06 23:57:19.121 [INFO][6180] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:57:19.124565 containerd[1675]: time="2025-07-06T23:57:19.123814615Z" level=info msg="TearDown network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\" successfully" Jul 6 23:57:19.124565 containerd[1675]: time="2025-07-06T23:57:19.123854516Z" level=info msg="StopPodSandbox for \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\" returns successfully" Jul 6 23:57:19.125176 containerd[1675]: time="2025-07-06T23:57:19.124784139Z" level=info msg="RemovePodSandbox for \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\"" Jul 6 23:57:19.125176 containerd[1675]: time="2025-07-06T23:57:19.124821440Z" level=info msg="Forcibly stopping sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\"" Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.176 [WARNING][6206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0", GenerateName:"calico-apiserver-5b4985b7cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"315fadc6-402d-4c42-a716-cdde0ac33312", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4985b7cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2f8c6d8615", ContainerID:"a1c067147bdba66d7ee4152611294c4a3251a3b3a85e59a74d076790a42974e6", Pod:"calico-apiserver-5b4985b7cd-b2wsx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11c8c76ac1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.177 [INFO][6206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.177 [INFO][6206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" iface="eth0" netns="" Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.177 [INFO][6206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.177 [INFO][6206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.210 [INFO][6215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" HandleID="k8s-pod-network.c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.210 [INFO][6215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.210 [INFO][6215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.220 [WARNING][6215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" HandleID="k8s-pod-network.c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.220 [INFO][6215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" HandleID="k8s-pod-network.c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Workload="ci--4081.3.4--a--2f8c6d8615-k8s-calico--apiserver--5b4985b7cd--b2wsx-eth0" Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.223 [INFO][6215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:57:19.228030 containerd[1675]: 2025-07-06 23:57:19.226 [INFO][6206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341" Jul 6 23:57:19.231221 containerd[1675]: time="2025-07-06T23:57:19.230189873Z" level=info msg="TearDown network for sandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\" successfully" Jul 6 23:57:19.241169 containerd[1675]: time="2025-07-06T23:57:19.239823714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:57:19.241169 containerd[1675]: time="2025-07-06T23:57:19.239940817Z" level=info msg="RemovePodSandbox \"c868aa299150722e189c029c5073bd7fbb044da04125c09b40047d416337f341\" returns successfully" Jul 6 23:57:19.507328 kubelet[3121]: I0706 23:57:19.506453 3121 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:57:19.507328 kubelet[3121]: I0706 23:57:19.506490 3121 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:57:19.813214 systemd[1]: run-containerd-runc-k8s.io-a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0-runc.c2Dsw9.mount: Deactivated successfully. Jul 6 23:57:31.067391 systemd[1]: run-containerd-runc-k8s.io-f67949894e6be477a2d3f53a9a4c87f253c3066a507c60d9f3d97490414b43a1-runc.xUkOF2.mount: Deactivated successfully. Jul 6 23:57:40.784366 kubelet[3121]: I0706 23:57:40.783848 3121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:57:40.824283 kubelet[3121]: I0706 23:57:40.824138 3121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ldbbw" podStartSLOduration=48.310219548 podStartE2EDuration="1m6.824116482s" podCreationTimestamp="2025-07-06 23:56:34 +0000 UTC" firstStartedPulling="2025-07-06 23:57:00.24372223 +0000 UTC m=+44.040945190" lastFinishedPulling="2025-07-06 23:57:18.757619164 +0000 UTC m=+62.554842124" observedRunningTime="2025-07-06 23:57:19.816381821 +0000 UTC m=+63.613604681" watchObservedRunningTime="2025-07-06 23:57:40.824116482 +0000 UTC m=+84.621339442" Jul 6 23:57:49.796148 systemd[1]: run-containerd-runc-k8s.io-a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0-runc.gI0PcE.mount: Deactivated successfully. 
Jul 6 23:58:05.018337 systemd[1]: Started sshd@7-10.200.8.12:22-10.200.16.10:44326.service - OpenSSH per-connection server daemon (10.200.16.10:44326).
Jul 6 23:58:05.642708 sshd[6372]: Accepted publickey for core from 10.200.16.10 port 44326 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:05.644466 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:05.648531 systemd-logind[1657]: New session 10 of user core.
Jul 6 23:58:05.656199 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:58:06.166755 sshd[6372]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:06.170885 systemd[1]: sshd@7-10.200.8.12:22-10.200.16.10:44326.service: Deactivated successfully.
Jul 6 23:58:06.173634 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:58:06.174501 systemd-logind[1657]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:58:06.175590 systemd-logind[1657]: Removed session 10.
Jul 6 23:58:08.730384 systemd[1]: run-containerd-runc-k8s.io-c3f022c0652b05a9565988f51eda93a5b02cb4c08eb93f206a8869477503648e-runc.7kkZj8.mount: Deactivated successfully.
Jul 6 23:58:10.871976 systemd[1]: run-containerd-runc-k8s.io-a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0-runc.DPmfw0.mount: Deactivated successfully.
Jul 6 23:58:11.285355 systemd[1]: Started sshd@8-10.200.8.12:22-10.200.16.10:34070.service - OpenSSH per-connection server daemon (10.200.16.10:34070).
Jul 6 23:58:11.923983 sshd[6433]: Accepted publickey for core from 10.200.16.10 port 34070 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:11.925647 sshd[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:11.930007 systemd-logind[1657]: New session 11 of user core.
Jul 6 23:58:11.936223 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:58:12.467419 sshd[6433]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:12.471486 systemd-logind[1657]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:58:12.473513 systemd[1]: sshd@8-10.200.8.12:22-10.200.16.10:34070.service: Deactivated successfully.
Jul 6 23:58:12.476658 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:58:12.481409 systemd-logind[1657]: Removed session 11.
Jul 6 23:58:17.583332 systemd[1]: Started sshd@9-10.200.8.12:22-10.200.16.10:34072.service - OpenSSH per-connection server daemon (10.200.16.10:34072).
Jul 6 23:58:18.202741 sshd[6455]: Accepted publickey for core from 10.200.16.10 port 34072 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:18.204362 sshd[6455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:18.208979 systemd-logind[1657]: New session 12 of user core.
Jul 6 23:58:18.216180 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:58:18.702718 sshd[6455]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:18.706292 systemd[1]: sshd@9-10.200.8.12:22-10.200.16.10:34072.service: Deactivated successfully.
Jul 6 23:58:18.708928 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:58:18.711260 systemd-logind[1657]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:58:18.712849 systemd-logind[1657]: Removed session 12.
Jul 6 23:58:18.823339 systemd[1]: Started sshd@10-10.200.8.12:22-10.200.16.10:34078.service - OpenSSH per-connection server daemon (10.200.16.10:34078).
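Every "Accepted publickey" line above identifies the same key by its OpenSSH-style SHA256 fingerprint (unpadded base64 of the SHA-256 digest of the wire-format public key). To confirm which authorized key that is, the fingerprint can be recomputed from an authorized_keys entry; a minimal sketch (the key path is an assumption, not taken from this log) using golang.org/x/crypto/ssh:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Any authorized_keys-style line works here; this path is assumed.
    	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Prints e.g. "SHA256:QmI8...", the same format sshd logs above.
    	fmt.Println(ssh.FingerprintSHA256(pub))
    }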
Jul 6 23:58:19.447608 sshd[6469]: Accepted publickey for core from 10.200.16.10 port 34078 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:19.449289 sshd[6469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:19.454481 systemd-logind[1657]: New session 13 of user core.
Jul 6 23:58:19.458263 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:58:20.040494 sshd[6469]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:20.044498 systemd[1]: sshd@10-10.200.8.12:22-10.200.16.10:34078.service: Deactivated successfully.
Jul 6 23:58:20.046664 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:58:20.047637 systemd-logind[1657]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:58:20.048751 systemd-logind[1657]: Removed session 13.
Jul 6 23:58:20.159261 systemd[1]: Started sshd@11-10.200.8.12:22-10.200.16.10:55922.service - OpenSSH per-connection server daemon (10.200.16.10:55922).
Jul 6 23:58:20.789367 sshd[6500]: Accepted publickey for core from 10.200.16.10 port 55922 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:20.791033 sshd[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:20.795595 systemd-logind[1657]: New session 14 of user core.
Jul 6 23:58:20.801182 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:58:21.291298 sshd[6500]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:21.295365 systemd[1]: sshd@11-10.200.8.12:22-10.200.16.10:55922.service: Deactivated successfully.
Jul 6 23:58:21.297492 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:58:21.298268 systemd-logind[1657]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:58:21.299257 systemd-logind[1657]: Removed session 14.
Jul 6 23:58:26.408278 systemd[1]: Started sshd@12-10.200.8.12:22-10.200.16.10:55932.service - OpenSSH per-connection server daemon (10.200.16.10:55932).
Jul 6 23:58:27.035429 sshd[6521]: Accepted publickey for core from 10.200.16.10 port 55932 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:27.037075 sshd[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:27.041111 systemd-logind[1657]: New session 15 of user core.
Jul 6 23:58:27.048200 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:58:27.539497 sshd[6521]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:27.543106 systemd[1]: sshd@12-10.200.8.12:22-10.200.16.10:55932.service: Deactivated successfully.
Jul 6 23:58:27.545400 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:58:27.547340 systemd-logind[1657]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:58:27.548254 systemd-logind[1657]: Removed session 15.
Jul 6 23:58:32.657342 systemd[1]: Started sshd@13-10.200.8.12:22-10.200.16.10:37082.service - OpenSSH per-connection server daemon (10.200.16.10:37082).
Jul 6 23:58:33.278978 sshd[6575]: Accepted publickey for core from 10.200.16.10 port 37082 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:33.281013 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:33.286090 systemd-logind[1657]: New session 16 of user core.
Jul 6 23:58:33.291201 systemd[1]: Started session-16.scope - Session 16 of User core.
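The accept/open/close/deactivate cycle repeats for every connection above, and most sessions last well under a second, which is consistent with scripted checks rather than interactive logins. A throwaway sketch for measuring session lifetimes from journal lines shaped like these (the line layout is inferred from the entries above, not an official journal parser):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    // Matches e.g. "Jul 6 23:58:05.648531 systemd-logind[1657]: New session 10 of user core."
    var re = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: (New|Removed) session (\d+)`)

    func main() {
    	opened := map[string]time.Time{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
    	for sc.Scan() {
    		m := re.FindStringSubmatch(sc.Text())
    		if m == nil {
    			continue
    		}
    		// The syslog-style timestamp carries no year; durations within one day are fine.
    		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
    		if err != nil {
    			continue
    		}
    		if m[2] == "New" {
    			opened[m[3]] = ts
    		} else if start, ok := opened[m[3]]; ok {
    			fmt.Printf("session %s lasted %s\n", m[3], ts.Sub(start))
    		}
    	}
    }

Fed the entries above, this would report, for example, session 10 lasting roughly 527ms (23:58:05.648531 to 23:58:06.175590).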
Jul 6 23:58:33.782125 sshd[6575]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:33.785574 systemd[1]: sshd@13-10.200.8.12:22-10.200.16.10:37082.service: Deactivated successfully.
Jul 6 23:58:33.787814 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:58:33.789286 systemd-logind[1657]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:58:33.790781 systemd-logind[1657]: Removed session 16.
Jul 6 23:58:38.709456 systemd[1]: run-containerd-runc-k8s.io-c3f022c0652b05a9565988f51eda93a5b02cb4c08eb93f206a8869477503648e-runc.YdHUQd.mount: Deactivated successfully.
Jul 6 23:58:38.896453 systemd[1]: Started sshd@14-10.200.8.12:22-10.200.16.10:37098.service - OpenSSH per-connection server daemon (10.200.16.10:37098).
Jul 6 23:58:39.527600 sshd[6607]: Accepted publickey for core from 10.200.16.10 port 37098 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:39.529230 sshd[6607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:39.533305 systemd-logind[1657]: New session 17 of user core.
Jul 6 23:58:39.539724 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:58:40.036293 sshd[6607]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:40.041535 systemd[1]: sshd@14-10.200.8.12:22-10.200.16.10:37098.service: Deactivated successfully.
Jul 6 23:58:40.044445 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:58:40.049834 systemd-logind[1657]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:58:40.052078 systemd-logind[1657]: Removed session 17.
Jul 6 23:58:40.154834 systemd[1]: Started sshd@15-10.200.8.12:22-10.200.16.10:51472.service - OpenSSH per-connection server daemon (10.200.16.10:51472).
Jul 6 23:58:40.806497 sshd[6620]: Accepted publickey for core from 10.200.16.10 port 51472 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:40.807596 sshd[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:40.816503 systemd-logind[1657]: New session 18 of user core.
Jul 6 23:58:40.821531 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:58:41.438356 sshd[6620]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:41.443562 systemd-logind[1657]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:58:41.445936 systemd[1]: sshd@15-10.200.8.12:22-10.200.16.10:51472.service: Deactivated successfully.
Jul 6 23:58:41.450007 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:58:41.452336 systemd-logind[1657]: Removed session 18.
Jul 6 23:58:41.555333 systemd[1]: Started sshd@16-10.200.8.12:22-10.200.16.10:51478.service - OpenSSH per-connection server daemon (10.200.16.10:51478).
Jul 6 23:58:42.190733 sshd[6631]: Accepted publickey for core from 10.200.16.10 port 51478 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:42.193085 sshd[6631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:42.203061 systemd-logind[1657]: New session 19 of user core.
Jul 6 23:58:42.206452 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:58:43.509189 sshd[6631]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:43.514081 systemd[1]: sshd@16-10.200.8.12:22-10.200.16.10:51478.service: Deactivated successfully.
Jul 6 23:58:43.516794 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:58:43.517761 systemd-logind[1657]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:58:43.519077 systemd-logind[1657]: Removed session 19.
Jul 6 23:58:43.625337 systemd[1]: Started sshd@17-10.200.8.12:22-10.200.16.10:51480.service - OpenSSH per-connection server daemon (10.200.16.10:51480).
Jul 6 23:58:44.247646 sshd[6649]: Accepted publickey for core from 10.200.16.10 port 51480 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:44.249220 sshd[6649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:44.253858 systemd-logind[1657]: New session 20 of user core.
Jul 6 23:58:44.257180 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:58:44.855692 sshd[6649]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:44.859740 systemd[1]: sshd@17-10.200.8.12:22-10.200.16.10:51480.service: Deactivated successfully.
Jul 6 23:58:44.862237 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:58:44.863146 systemd-logind[1657]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:58:44.864232 systemd-logind[1657]: Removed session 20.
Jul 6 23:58:44.978082 systemd[1]: Started sshd@18-10.200.8.12:22-10.200.16.10:51490.service - OpenSSH per-connection server daemon (10.200.16.10:51490).
Jul 6 23:58:45.603836 sshd[6659]: Accepted publickey for core from 10.200.16.10 port 51490 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:45.605479 sshd[6659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:45.609433 systemd-logind[1657]: New session 21 of user core.
Jul 6 23:58:45.619201 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:58:46.105875 sshd[6659]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:46.109303 systemd[1]: sshd@18-10.200.8.12:22-10.200.16.10:51490.service: Deactivated successfully.
Jul 6 23:58:46.111657 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:58:46.113913 systemd-logind[1657]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:58:46.115209 systemd-logind[1657]: Removed session 21.
Jul 6 23:58:51.217227 systemd[1]: Started sshd@19-10.200.8.12:22-10.200.16.10:38206.service - OpenSSH per-connection server daemon (10.200.16.10:38206).
Jul 6 23:58:51.842553 sshd[6714]: Accepted publickey for core from 10.200.16.10 port 38206 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:51.844229 sshd[6714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:51.849540 systemd-logind[1657]: New session 22 of user core.
Jul 6 23:58:51.854187 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:58:52.341695 sshd[6714]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:52.345043 systemd[1]: sshd@19-10.200.8.12:22-10.200.16.10:38206.service: Deactivated successfully.
Jul 6 23:58:52.347387 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:58:52.348975 systemd-logind[1657]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:58:52.350431 systemd-logind[1657]: Removed session 22.
Jul 6 23:58:57.457344 systemd[1]: Started sshd@20-10.200.8.12:22-10.200.16.10:38214.service - OpenSSH per-connection server daemon (10.200.16.10:38214).
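The sshd@N-<local>:22-<peer>:<port>.service units above are per-connection instances: a listening socket unit accepts each TCP connection and spawns a short-lived service for it, which is why every login is bracketed by a "Started sshd@..." entry and a "...service: Deactivated successfully." entry, with the connection tuple encoded in the unit name (sshd@7-10.200.8.12:22-10.200.16.10:44326.service maps back to the "Accepted publickey ... port 44326" line). A minimal per-connection daemon under the same mechanism might look like this (illustrative only, not sshd's implementation), using go-systemd's activation package:

    package main

    import (
    	"fmt"
    	"log"
    	"net"

    	"github.com/coreos/go-systemd/v22/activation"
    )

    func main() {
    	// With Accept=yes on the socket unit, systemd accepts the TCP connection
    	// itself and passes the already-connected socket to this instance.
    	files := activation.Files(true)
    	if len(files) != 1 {
    		log.Fatalf("expected exactly one socket from systemd, got %d", len(files))
    	}
    	conn, err := net.FileConn(files[0])
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// One connection, one process; exiting lets systemd deactivate the unit.
    	fmt.Fprintf(conn, "hello from %s\n", conn.LocalAddr())
    }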
Jul 6 23:58:58.079959 sshd[6728]: Accepted publickey for core from 10.200.16.10 port 38214 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:58:58.081772 sshd[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:58:58.086864 systemd-logind[1657]: New session 23 of user core.
Jul 6 23:58:58.093204 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:58:58.577796 sshd[6728]: pam_unix(sshd:session): session closed for user core
Jul 6 23:58:58.580900 systemd[1]: sshd@20-10.200.8.12:22-10.200.16.10:38214.service: Deactivated successfully.
Jul 6 23:58:58.583283 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:58:58.585010 systemd-logind[1657]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:58:58.586162 systemd-logind[1657]: Removed session 23.
Jul 6 23:59:03.698313 systemd[1]: Started sshd@21-10.200.8.12:22-10.200.16.10:50312.service - OpenSSH per-connection server daemon (10.200.16.10:50312).
Jul 6 23:59:04.323549 sshd[6763]: Accepted publickey for core from 10.200.16.10 port 50312 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:04.325126 sshd[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:04.329968 systemd-logind[1657]: New session 24 of user core.
Jul 6 23:59:04.337206 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:59:04.820855 sshd[6763]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:04.825385 systemd[1]: sshd@21-10.200.8.12:22-10.200.16.10:50312.service: Deactivated successfully.
Jul 6 23:59:04.827938 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:59:04.829336 systemd-logind[1657]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:59:04.831242 systemd-logind[1657]: Removed session 24.
Jul 6 23:59:09.940333 systemd[1]: Started sshd@22-10.200.8.12:22-10.200.16.10:57312.service - OpenSSH per-connection server daemon (10.200.16.10:57312).
Jul 6 23:59:10.564895 sshd[6794]: Accepted publickey for core from 10.200.16.10 port 57312 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:10.566569 sshd[6794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:10.571909 systemd-logind[1657]: New session 25 of user core.
Jul 6 23:59:10.579176 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:59:10.860417 systemd[1]: run-containerd-runc-k8s.io-a6e13348983a071219476df7f98a86d4d77426df4798a1e77e2a7977dfefd4a0-runc.6eAFJ0.mount: Deactivated successfully.
Jul 6 23:59:11.080212 sshd[6794]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:11.083430 systemd[1]: sshd@22-10.200.8.12:22-10.200.16.10:57312.service: Deactivated successfully.
Jul 6 23:59:11.085778 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:59:11.087486 systemd-logind[1657]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:59:11.088815 systemd-logind[1657]: Removed session 25.
Jul 6 23:59:16.196366 systemd[1]: Started sshd@23-10.200.8.12:22-10.200.16.10:57320.service - OpenSSH per-connection server daemon (10.200.16.10:57320).
Jul 6 23:59:16.819886 sshd[6828]: Accepted publickey for core from 10.200.16.10 port 57320 ssh2: RSA SHA256:QmI8F31TDdpIeWklR58b451193Y1OWr2GSIDbn8x2cc
Jul 6 23:59:16.821712 sshd[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:16.826610 systemd-logind[1657]: New session 26 of user core.
Jul 6 23:59:16.831218 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:59:17.321453 sshd[6828]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:17.324373 systemd[1]: sshd@23-10.200.8.12:22-10.200.16.10:57320.service: Deactivated successfully.
Jul 6 23:59:17.326755 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:59:17.328625 systemd-logind[1657]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:59:17.329592 systemd-logind[1657]: Removed session 26.
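Each login above is also tracked as a transient session-N.scope unit, which is what the paired "Started session-N.scope" and "session-N.scope: Deactivated successfully." entries record. Live sessions can be enumerated the same way systemd tracks them; a sketch (assuming the go-systemd D-Bus bindings behave as documented) that lists active session scopes on the node:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
    	ctx := context.Background()
    	conn, err := dbus.NewWithContext(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Transient scopes named session-<id>.scope back each logind session.
    	units, err := conn.ListUnitsByPatternsContext(ctx, []string{"active"}, []string{"session-*.scope"})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, u := range units {
    		fmt.Printf("%s\t%s\t%s\n", u.Name, u.ActiveState, u.Description)
    	}
    }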