Aug 13 07:14:35.091925 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:14:35.091955 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:14:35.091964 kernel: BIOS-provided physical RAM map: Aug 13 07:14:35.091973 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 13 07:14:35.091979 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Aug 13 07:14:35.091986 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Aug 13 07:14:35.091996 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Aug 13 07:14:35.092005 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Aug 13 07:14:35.092012 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Aug 13 07:14:35.092020 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Aug 13 07:14:35.092027 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Aug 13 07:14:35.092033 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Aug 13 07:14:35.092042 kernel: printk: bootconsole [earlyser0] enabled Aug 13 07:14:35.092048 kernel: NX (Execute Disable) protection: active Aug 13 07:14:35.092061 kernel: APIC: Static calls initialized Aug 13 07:14:35.092068 kernel: efi: EFI v2.7 by Microsoft Aug 13 07:14:35.092076 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Aug 
13 07:14:35.092086 kernel: SMBIOS 3.1.0 present. Aug 13 07:14:35.092093 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Aug 13 07:14:35.092101 kernel: Hypervisor detected: Microsoft Hyper-V Aug 13 07:14:35.092110 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Aug 13 07:14:35.092117 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 Aug 13 07:14:35.092126 kernel: Hyper-V: Nested features: 0x1e0101 Aug 13 07:14:35.092134 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Aug 13 07:14:35.092143 kernel: Hyper-V: Using hypercall for remote TLB flush Aug 13 07:14:35.092160 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 13 07:14:35.092170 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 13 07:14:35.092178 kernel: tsc: Marking TSC unstable due to running on Hyper-V Aug 13 07:14:35.092188 kernel: tsc: Detected 2593.908 MHz processor Aug 13 07:14:35.092198 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:14:35.092206 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:14:35.092215 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Aug 13 07:14:35.092225 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 13 07:14:35.092235 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:14:35.092244 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Aug 13 07:14:35.092252 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Aug 13 07:14:35.092259 kernel: Using GB pages for direct mapping Aug 13 07:14:35.092268 kernel: Secure boot disabled Aug 13 07:14:35.092276 kernel: ACPI: Early table checksum verification disabled Aug 13 07:14:35.092294 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Aug 13 
07:14:35.092306 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:14:35.092318 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:14:35.092326 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Aug 13 07:14:35.092336 kernel: ACPI: FACS 0x000000003FFFE000 000040 Aug 13 07:14:35.092344 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:14:35.092352 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:14:35.092362 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:14:35.092372 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:14:35.092383 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:14:35.092390 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:14:35.092400 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 07:14:35.092409 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Aug 13 07:14:35.092416 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Aug 13 07:14:35.092427 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Aug 13 07:14:35.092434 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Aug 13 07:14:35.092447 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Aug 13 07:14:35.092454 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Aug 13 07:14:35.092462 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Aug 13 07:14:35.092472 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Aug 13 07:14:35.092480 kernel: ACPI: 
Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Aug 13 07:14:35.092487 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Aug 13 07:14:35.092495 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 07:14:35.092503 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 07:14:35.092510 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Aug 13 07:14:35.092520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Aug 13 07:14:35.092527 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Aug 13 07:14:35.092535 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Aug 13 07:14:35.092542 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Aug 13 07:14:35.092550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Aug 13 07:14:35.092558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Aug 13 07:14:35.092565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Aug 13 07:14:35.092573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Aug 13 07:14:35.092580 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Aug 13 07:14:35.092590 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Aug 13 07:14:35.092597 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Aug 13 07:14:35.092605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Aug 13 07:14:35.092612 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Aug 13 07:14:35.092621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Aug 13 07:14:35.092630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Aug 13 07:14:35.092638 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] 
Aug 13 07:14:35.092648 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Aug 13 07:14:35.092656 kernel: Zone ranges: Aug 13 07:14:35.092667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:14:35.092676 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 07:14:35.092684 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Aug 13 07:14:35.092700 kernel: Movable zone start for each node Aug 13 07:14:35.092709 kernel: Early memory node ranges Aug 13 07:14:35.092718 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 13 07:14:35.092729 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Aug 13 07:14:35.092738 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Aug 13 07:14:35.092747 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Aug 13 07:14:35.092758 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Aug 13 07:14:35.092769 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:14:35.092776 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 13 07:14:35.092786 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Aug 13 07:14:35.092794 kernel: ACPI: PM-Timer IO Port: 0x408 Aug 13 07:14:35.092802 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Aug 13 07:14:35.092813 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Aug 13 07:14:35.092820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:14:35.092830 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:14:35.092841 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Aug 13 07:14:35.092850 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 07:14:35.092859 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Aug 13 07:14:35.092866 kernel: Booting paravirtualized kernel on Hyper-V Aug 13 07:14:35.092877 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, 
max_idle_ns: 1910969940391419 ns Aug 13 07:14:35.092885 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 07:14:35.092894 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Aug 13 07:14:35.092903 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 13 07:14:35.092910 kernel: pcpu-alloc: [0] 0 1 Aug 13 07:14:35.092923 kernel: Hyper-V: PV spinlocks enabled Aug 13 07:14:35.092930 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 07:14:35.092941 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:14:35.092950 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:14:35.092958 kernel: random: crng init done Aug 13 07:14:35.092968 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 13 07:14:35.092975 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 07:14:35.092985 kernel: Fallback order for Node 0: 0 Aug 13 07:14:35.092996 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Aug 13 07:14:35.093014 kernel: Policy zone: Normal Aug 13 07:14:35.093025 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:14:35.093035 kernel: software IO TLB: area num 2. 
Aug 13 07:14:35.093043 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 310128K reserved, 0K cma-reserved) Aug 13 07:14:35.093054 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 07:14:35.093063 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:14:35.093073 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:14:35.093081 kernel: Dynamic Preempt: voluntary Aug 13 07:14:35.093090 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:14:35.093107 kernel: rcu: RCU event tracing is enabled. Aug 13 07:14:35.093120 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 07:14:35.093131 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:14:35.093142 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:14:35.093152 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:14:35.093162 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:14:35.093174 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 07:14:35.093182 kernel: Using NULL legacy PIC Aug 13 07:14:35.093194 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Aug 13 07:14:35.093202 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 13 07:14:35.093213 kernel: Console: colour dummy device 80x25 Aug 13 07:14:35.093221 kernel: printk: console [tty1] enabled Aug 13 07:14:35.093230 kernel: printk: console [ttyS0] enabled Aug 13 07:14:35.093240 kernel: printk: bootconsole [earlyser0] disabled Aug 13 07:14:35.093248 kernel: ACPI: Core revision 20230628 Aug 13 07:14:35.093259 kernel: Failed to register legacy timer interrupt Aug 13 07:14:35.093269 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:14:35.093286 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 13 07:14:35.093295 kernel: Hyper-V: Using IPI hypercalls Aug 13 07:14:35.093306 kernel: APIC: send_IPI() replaced with hv_send_ipi() Aug 13 07:14:35.093314 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Aug 13 07:14:35.093325 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Aug 13 07:14:35.093333 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Aug 13 07:14:35.093344 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Aug 13 07:14:35.093353 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Aug 13 07:14:35.093365 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908) Aug 13 07:14:35.093374 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 13 07:14:35.093383 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Aug 13 07:14:35.093393 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:14:35.093401 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 07:14:35.093412 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:14:35.093420 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Aug 13 07:14:35.093430 kernel: RETBleed: Vulnerable Aug 13 07:14:35.093439 kernel: Speculative Store Bypass: Vulnerable Aug 13 07:14:35.093450 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:14:35.093460 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:14:35.093468 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 07:14:35.093479 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:14:35.093487 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:14:35.093497 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:14:35.093512 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Aug 13 07:14:35.093522 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Aug 13 07:14:35.093533 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Aug 13 07:14:35.093543 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:14:35.093552 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Aug 13 07:14:35.093565 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Aug 13 07:14:35.093575 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Aug 13 07:14:35.093583 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Aug 13 07:14:35.093594 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:14:35.093602 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:14:35.093611 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:14:35.093621 kernel: landlock: Up and running. Aug 13 07:14:35.093629 kernel: SELinux: Initializing. 
Aug 13 07:14:35.093640 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:14:35.093648 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:14:35.093658 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Aug 13 07:14:35.093667 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:14:35.093680 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:14:35.093689 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:14:35.093697 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Aug 13 07:14:35.093706 kernel: signal: max sigframe size: 3632 Aug 13 07:14:35.093717 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:14:35.093725 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:14:35.093736 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:14:35.093744 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:14:35.093754 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:14:35.093765 kernel: .... node #0, CPUs: #1 Aug 13 07:14:35.093774 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Aug 13 07:14:35.093785 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Aug 13 07:14:35.093793 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:14:35.097685 kernel: smpboot: Max logical packages: 1 Aug 13 07:14:35.097710 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Aug 13 07:14:35.097727 kernel: devtmpfs: initialized Aug 13 07:14:35.097742 kernel: x86/mm: Memory block size: 128MB Aug 13 07:14:35.097764 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Aug 13 07:14:35.097779 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:14:35.097794 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 07:14:35.097809 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:14:35.097824 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:14:35.097839 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:14:35.097854 kernel: audit: type=2000 audit(1755069273.028:1): state=initialized audit_enabled=0 res=1 Aug 13 07:14:35.097868 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:14:35.097882 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:14:35.097900 kernel: cpuidle: using governor menu Aug 13 07:14:35.097914 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:14:35.097929 kernel: dca service started, version 1.12.1 Aug 13 07:14:35.097944 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Aug 13 07:14:35.097958 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 07:14:35.097973 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:14:35.097988 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:14:35.098002 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:14:35.098020 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:14:35.098034 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:14:35.098049 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:14:35.098064 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:14:35.098079 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:14:35.098093 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:14:35.098108 kernel: ACPI: Interpreter enabled Aug 13 07:14:35.098123 kernel: ACPI: PM: (supports S0 S5) Aug 13 07:14:35.098137 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:14:35.098152 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:14:35.098169 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 13 07:14:35.098184 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Aug 13 07:14:35.098198 kernel: iommu: Default domain type: Translated Aug 13 07:14:35.098213 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:14:35.098228 kernel: efivars: Registered efivars operations Aug 13 07:14:35.098242 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:14:35.098257 kernel: PCI: System does not support PCI Aug 13 07:14:35.098271 kernel: vgaarb: loaded Aug 13 07:14:35.098295 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Aug 13 07:14:35.098313 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:14:35.098328 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:14:35.098343 kernel: pnp: PnP ACPI init Aug 13 07:14:35.098357 kernel: pnp: PnP ACPI: found 3 devices Aug 13 07:14:35.098372 kernel: 
clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:14:35.098387 kernel: NET: Registered PF_INET protocol family Aug 13 07:14:35.098402 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:14:35.098417 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 13 07:14:35.098432 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:14:35.098449 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 07:14:35.098464 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 13 07:14:35.098479 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 13 07:14:35.098493 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 07:14:35.098508 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 07:14:35.098523 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:14:35.098538 kernel: NET: Registered PF_XDP protocol family Aug 13 07:14:35.098553 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:14:35.098568 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 07:14:35.098585 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Aug 13 07:14:35.098600 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:14:35.098615 kernel: Initialise system trusted keyrings Aug 13 07:14:35.098629 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 13 07:14:35.098643 kernel: Key type asymmetric registered Aug 13 07:14:35.098658 kernel: Asymmetric key parser 'x509' registered Aug 13 07:14:35.098672 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:14:35.098687 kernel: io scheduler mq-deadline registered Aug 13 07:14:35.098701 kernel: io scheduler kyber registered Aug 
13 07:14:35.098718 kernel: io scheduler bfq registered Aug 13 07:14:35.098733 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:14:35.098748 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:14:35.098762 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:14:35.098777 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 07:14:35.098792 kernel: i8042: PNP: No PS/2 controller found. Aug 13 07:14:35.098988 kernel: rtc_cmos 00:02: registered as rtc0 Aug 13 07:14:35.099114 kernel: rtc_cmos 00:02: setting system clock to 2025-08-13T07:14:34 UTC (1755069274) Aug 13 07:14:35.099230 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Aug 13 07:14:35.099247 kernel: intel_pstate: CPU model not supported Aug 13 07:14:35.099261 kernel: efifb: probing for efifb Aug 13 07:14:35.099320 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 13 07:14:35.099348 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 13 07:14:35.099366 kernel: efifb: scrolling: redraw Aug 13 07:14:35.099379 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 13 07:14:35.099392 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 07:14:35.099410 kernel: fb0: EFI VGA frame buffer device Aug 13 07:14:35.099425 kernel: pstore: Using crash dump compression: deflate Aug 13 07:14:35.099439 kernel: pstore: Registered efi_pstore as persistent store backend Aug 13 07:14:35.099450 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:14:35.099464 kernel: Segment Routing with IPv6 Aug 13 07:14:35.099478 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:14:35.099493 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:14:35.099507 kernel: Key type dns_resolver registered Aug 13 07:14:35.099519 kernel: IPI shorthand broadcast: enabled Aug 13 07:14:35.099535 kernel: sched_clock: Marking stable (869002800, 45003500)->(1113094200, -199087900) 
Aug 13 07:14:35.099549 kernel: registered taskstats version 1 Aug 13 07:14:35.099561 kernel: Loading compiled-in X.509 certificates Aug 13 07:14:35.099579 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:14:35.099598 kernel: Key type .fscrypt registered Aug 13 07:14:35.099609 kernel: Key type fscrypt-provisioning registered Aug 13 07:14:35.099621 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:14:35.099633 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:14:35.099646 kernel: ima: No architecture policies found Aug 13 07:14:35.099664 kernel: clk: Disabling unused clocks Aug 13 07:14:35.099676 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:14:35.099688 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:14:35.099701 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:14:35.099714 kernel: Run /init as init process Aug 13 07:14:35.099728 kernel: with arguments: Aug 13 07:14:35.099743 kernel: /init Aug 13 07:14:35.099755 kernel: with environment: Aug 13 07:14:35.099768 kernel: HOME=/ Aug 13 07:14:35.099786 kernel: TERM=linux Aug 13 07:14:35.099801 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:14:35.099819 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:14:35.099837 systemd[1]: Detected virtualization microsoft. Aug 13 07:14:35.099853 systemd[1]: Detected architecture x86-64. Aug 13 07:14:35.099868 systemd[1]: Running in initrd. Aug 13 07:14:35.099884 systemd[1]: No hostname configured, using default hostname. Aug 13 07:14:35.099899 systemd[1]: Hostname set to . 
Aug 13 07:14:35.099918 systemd[1]: Initializing machine ID from random generator. Aug 13 07:14:35.099933 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:14:35.099949 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:14:35.099965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:14:35.099982 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:14:35.099998 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:14:35.100014 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:14:35.100030 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:14:35.100051 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:14:35.100068 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:14:35.100084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:14:35.100100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:14:35.100116 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:14:35.100130 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:14:35.100147 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:14:35.100162 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:14:35.100178 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:14:35.100193 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:14:35.100209 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Aug 13 07:14:35.100225 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:14:35.100240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:14:35.100256 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:14:35.100271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:14:35.100322 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:14:35.100337 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:14:35.100351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:14:35.100366 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:14:35.100381 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:14:35.100396 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:14:35.100411 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:14:35.100426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:14:35.100468 systemd-journald[176]: Collecting audit messages is disabled. Aug 13 07:14:35.100505 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:14:35.100521 systemd-journald[176]: Journal started Aug 13 07:14:35.100555 systemd-journald[176]: Runtime Journal (/run/log/journal/261ca2fe571c4b25bc150f0e7b5664ef) is 8.0M, max 158.8M, 150.8M free. Aug 13 07:14:35.099618 systemd-modules-load[177]: Inserted module 'overlay' Aug 13 07:14:35.113578 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:14:35.124299 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:14:35.129562 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:14:35.132269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 07:14:35.154310 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:14:35.155588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:14:35.164147 kernel: Bridge firewalling registered Aug 13 07:14:35.164220 systemd-modules-load[177]: Inserted module 'br_netfilter' Aug 13 07:14:35.167488 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:14:35.177550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:14:35.178058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:14:35.189472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:14:35.189870 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:14:35.206517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:14:35.225922 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:14:35.236522 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:14:35.240461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:14:35.257441 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:14:35.261526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 07:14:35.275486 dracut-cmdline[208]: dracut-dracut-053 Aug 13 07:14:35.279805 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:14:35.275545 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:14:35.342022 systemd-resolved[215]: Positive Trust Anchors: Aug 13 07:14:35.344805 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:14:35.344863 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:14:35.369102 systemd-resolved[215]: Defaulting to hostname 'linux'. Aug 13 07:14:35.372870 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:14:35.383158 kernel: SCSI subsystem initialized Aug 13 07:14:35.375841 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:14:35.394308 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 07:14:35.406313 kernel: iscsi: registered transport (tcp) Aug 13 07:14:35.428599 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:14:35.428700 kernel: QLogic iSCSI HBA Driver Aug 13 07:14:35.465119 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:14:35.474433 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:14:35.503277 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:14:35.503403 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:14:35.506621 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:14:35.548316 kernel: raid6: avx512x4 gen() 18377 MB/s Aug 13 07:14:35.567297 kernel: raid6: avx512x2 gen() 18437 MB/s Aug 13 07:14:35.586312 kernel: raid6: avx512x1 gen() 18392 MB/s Aug 13 07:14:35.605303 kernel: raid6: avx2x4 gen() 18164 MB/s Aug 13 07:14:35.624298 kernel: raid6: avx2x2 gen() 18275 MB/s Aug 13 07:14:35.644358 kernel: raid6: avx2x1 gen() 13929 MB/s Aug 13 07:14:35.644416 kernel: raid6: using algorithm avx512x2 gen() 18437 MB/s Aug 13 07:14:35.666349 kernel: raid6: .... xor() 28946 MB/s, rmw enabled Aug 13 07:14:35.666404 kernel: raid6: using avx512x2 recovery algorithm Aug 13 07:14:35.689316 kernel: xor: automatically using best checksumming function avx Aug 13 07:14:35.836310 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:14:35.846379 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:14:35.855469 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:14:35.868525 systemd-udevd[395]: Using default interface naming scheme 'v255'. Aug 13 07:14:35.877252 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:14:35.888466 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 13 07:14:35.902687 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Aug 13 07:14:35.931197 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:14:35.942625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:14:35.986429 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:14:35.999564 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:14:36.033089 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:14:36.034089 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:14:36.043597 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:14:36.046959 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:14:36.066412 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:14:36.078306 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:14:36.100148 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:14:36.119611 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:14:36.119675 kernel: AES CTR mode by8 optimization enabled Aug 13 07:14:36.122712 kernel: hv_vmbus: Vmbus version:5.2 Aug 13 07:14:36.123647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:14:36.126769 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:14:36.133589 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:14:36.136268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:14:36.136575 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:14:36.149570 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 07:14:36.166728 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:14:36.183984 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 07:14:36.184011 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 07:14:36.173202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:14:36.195819 kernel: PTP clock support registered Aug 13 07:14:36.173657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:14:36.200814 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:14:36.211302 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 07:14:36.215330 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 07:14:36.225916 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 13 07:14:36.232306 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 07:14:36.232392 kernel: hv_vmbus: registering driver hv_utils Aug 13 07:14:36.237092 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 07:14:36.237167 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 07:14:36.239304 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 07:14:36.706066 systemd-resolved[215]: Clock change detected. Flushing caches. Aug 13 07:14:36.715140 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 07:14:36.715271 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 13 07:14:36.723541 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 07:14:36.724140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 07:14:36.739777 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 07:14:36.739833 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 07:14:36.739846 kernel: scsi host0: storvsc_host_t Aug 13 07:14:36.741607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:14:36.756378 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 07:14:36.756488 kernel: scsi host1: storvsc_host_t Aug 13 07:14:36.756944 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 07:14:36.774215 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:14:36.784750 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 07:14:36.785118 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 07:14:36.790180 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 07:14:36.803752 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 07:14:36.804046 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 07:14:36.810093 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 07:14:36.810389 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 07:14:36.810570 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 07:14:36.818163 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:36.821171 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 07:14:36.930965 kernel: hv_netvsc 7c1e522d-accf-7c1e-522d-accf7c1e522d eth0: VF slot 1 added Aug 13 07:14:36.938164 kernel: hv_vmbus: registering driver hv_pci Aug 13 07:14:36.942162 kernel: hv_pci 1c2d52af-53be-4214-b728-e2a5cc9b6bf0: PCI VMBus probing: Using version 0x10004 Aug 13 07:14:36.947167 kernel: hv_pci 1c2d52af-53be-4214-b728-e2a5cc9b6bf0: PCI host bridge to bus 53be:00 Aug 13 07:14:36.947359 kernel: pci_bus 53be:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 13 07:14:36.950166 kernel: pci_bus 53be:00: No busn resource found for root bus, will use [bus 00-ff]
Aug 13 07:14:36.956343 kernel: pci 53be:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 13 07:14:36.961191 kernel: pci 53be:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 07:14:36.964485 kernel: pci 53be:00:02.0: enabling Extended Tags Aug 13 07:14:36.976371 kernel: pci 53be:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 53be:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 13 07:14:36.985009 kernel: pci_bus 53be:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 07:14:36.985473 kernel: pci 53be:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 07:14:37.152549 kernel: mlx5_core 53be:00:02.0: enabling device (0000 -> 0002) Aug 13 07:14:37.156164 kernel: mlx5_core 53be:00:02.0: firmware version: 14.30.5000 Aug 13 07:14:37.397943 kernel: hv_netvsc 7c1e522d-accf-7c1e-522d-accf7c1e522d eth0: VF registering: eth1 Aug 13 07:14:37.398438 kernel: mlx5_core 53be:00:02.0 eth1: joined to eth0 Aug 13 07:14:37.405339 kernel: mlx5_core 53be:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 13 07:14:37.414175 kernel: mlx5_core 53be:00:02.0 enP21438s1: renamed from eth1 Aug 13 07:14:37.427923 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 13 07:14:37.530409 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (444) Aug 13 07:14:37.549559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 13 07:14:37.565265 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 13 07:14:37.580181 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (441) Aug 13 07:14:37.595501 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Aug 13 07:14:37.598914 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 13 07:14:37.617426 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:14:37.632242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:37.642188 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:37.650180 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:38.650212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:38.650807 disk-uuid[602]: The operation has completed successfully. Aug 13 07:14:38.741223 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:14:38.741338 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:14:38.764331 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:14:38.770221 sh[715]: Success Aug 13 07:14:38.801170 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:14:39.175319 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:14:39.192293 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:14:39.197556 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:14:39.230343 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:14:39.230427 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:14:39.234016 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:14:39.236980 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:14:39.239498 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:14:39.654201 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Aug 13 07:14:39.655083 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:14:39.666434 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:14:39.672000 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:14:39.689174 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:14:39.689238 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:14:39.694010 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:14:39.767129 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:14:39.778326 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:14:39.779507 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:14:39.793951 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:14:39.800171 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:14:39.808313 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:14:39.808453 systemd-networkd[889]: lo: Link UP Aug 13 07:14:39.808456 systemd-networkd[889]: lo: Gained carrier Aug 13 07:14:39.816453 systemd-networkd[889]: Enumeration completed Aug 13 07:14:39.817320 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:14:39.817496 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:14:39.817500 systemd-networkd[889]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:14:39.820569 systemd[1]: Reached target network.target - Network. 
Aug 13 07:14:39.838435 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:14:39.885181 kernel: mlx5_core 53be:00:02.0 enP21438s1: Link up Aug 13 07:14:39.916208 kernel: hv_netvsc 7c1e522d-accf-7c1e-522d-accf7c1e522d eth0: Data path switched to VF: enP21438s1 Aug 13 07:14:39.916982 systemd-networkd[889]: enP21438s1: Link UP Aug 13 07:14:39.919660 systemd-networkd[889]: eth0: Link UP Aug 13 07:14:39.921658 systemd-networkd[889]: eth0: Gained carrier Aug 13 07:14:39.921676 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:14:39.934384 systemd-networkd[889]: enP21438s1: Gained carrier Aug 13 07:14:39.951194 systemd-networkd[889]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 07:14:40.920296 ignition[900]: Ignition 2.19.0 Aug 13 07:14:40.920308 ignition[900]: Stage: fetch-offline Aug 13 07:14:40.920351 ignition[900]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:40.920362 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:40.920475 ignition[900]: parsed url from cmdline: "" Aug 13 07:14:40.920480 ignition[900]: no config URL provided Aug 13 07:14:40.920487 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:14:40.920497 ignition[900]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:14:40.920504 ignition[900]: failed to fetch config: resource requires networking Aug 13 07:14:40.922725 ignition[900]: Ignition finished successfully Aug 13 07:14:40.941280 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:14:40.950392 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 07:14:40.964903 ignition[908]: Ignition 2.19.0 Aug 13 07:14:40.964916 ignition[908]: Stage: fetch Aug 13 07:14:40.965185 ignition[908]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:40.965201 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:40.965333 ignition[908]: parsed url from cmdline: "" Aug 13 07:14:40.965336 ignition[908]: no config URL provided Aug 13 07:14:40.965342 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:14:40.965350 ignition[908]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:14:40.965379 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 07:14:41.068378 ignition[908]: GET result: OK Aug 13 07:14:41.068880 ignition[908]: config has been read from IMDS userdata Aug 13 07:14:41.068986 ignition[908]: parsing config with SHA512: bc5ea631a6497609ef516ef0888c82d16cfad68a2e3172432f4e4174ab94e271b7cd8a9b95ecc23d5f43b270debb5ad228c4bf5e0a7603771abf9549f62f1064 Aug 13 07:14:41.078123 unknown[908]: fetched base config from "system" Aug 13 07:14:41.078139 unknown[908]: fetched base config from "system" Aug 13 07:14:41.079160 ignition[908]: fetch: fetch complete Aug 13 07:14:41.078159 unknown[908]: fetched user config from "azure" Aug 13 07:14:41.079175 ignition[908]: fetch: fetch passed Aug 13 07:14:41.080917 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:14:41.079237 ignition[908]: Ignition finished successfully Aug 13 07:14:41.091365 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 13 07:14:41.108942 ignition[914]: Ignition 2.19.0 Aug 13 07:14:41.108953 ignition[914]: Stage: kargs Aug 13 07:14:41.109212 ignition[914]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:41.109227 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:41.110136 ignition[914]: kargs: kargs passed Aug 13 07:14:41.110210 ignition[914]: Ignition finished successfully Aug 13 07:14:41.119586 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:14:41.128439 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:14:41.146955 ignition[920]: Ignition 2.19.0 Aug 13 07:14:41.146967 ignition[920]: Stage: disks Aug 13 07:14:41.149306 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:14:41.147236 ignition[920]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:41.153806 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:14:41.147250 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:41.159019 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:14:41.148205 ignition[920]: disks: disks passed Aug 13 07:14:41.162140 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:14:41.148255 ignition[920]: Ignition finished successfully Aug 13 07:14:41.184026 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:14:41.188249 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:14:41.204457 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:14:41.274493 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 13 07:14:41.282048 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:14:41.295294 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 13 07:14:41.398167 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:14:41.398689 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:14:41.401719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:14:41.452306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:14:41.466173 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Aug 13 07:14:41.472814 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:14:41.472903 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:14:41.476180 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:14:41.476323 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:14:41.482995 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 07:14:41.488586 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:14:41.491671 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:14:41.491722 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:14:41.505257 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:14:41.507590 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:14:41.518317 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 07:14:41.837375 systemd-networkd[889]: eth0: Gained IPv6LL Aug 13 07:14:42.231658 coreos-metadata[955]: Aug 13 07:14:42.231 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 07:14:42.235849 coreos-metadata[955]: Aug 13 07:14:42.235 INFO Fetch successful Aug 13 07:14:42.238219 coreos-metadata[955]: Aug 13 07:14:42.235 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 07:14:42.253514 coreos-metadata[955]: Aug 13 07:14:42.253 INFO Fetch successful Aug 13 07:14:42.273705 coreos-metadata[955]: Aug 13 07:14:42.273 INFO wrote hostname ci-4081.3.5-a-0c3b310332 to /sysroot/etc/hostname Aug 13 07:14:42.280882 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 07:14:42.468433 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:14:42.521058 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:14:42.540982 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:14:42.575384 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:14:43.563844 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:14:43.574385 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:14:43.580391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:14:43.591792 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:14:43.598379 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:14:43.627631 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 07:14:43.634350 ignition[1058]: INFO : Ignition 2.19.0 Aug 13 07:14:43.634350 ignition[1058]: INFO : Stage: mount Aug 13 07:14:43.634350 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:43.634350 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:43.634350 ignition[1058]: INFO : mount: mount passed Aug 13 07:14:43.634350 ignition[1058]: INFO : Ignition finished successfully Aug 13 07:14:43.649080 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:14:43.658247 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:14:43.665300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:14:43.686184 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1069) Aug 13 07:14:43.693047 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:14:43.693133 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:14:43.695607 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:14:43.702167 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:14:43.704160 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:14:43.729046 ignition[1086]: INFO : Ignition 2.19.0 Aug 13 07:14:43.729046 ignition[1086]: INFO : Stage: files Aug 13 07:14:43.734322 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:43.734322 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:43.734322 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:14:43.775554 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:14:43.775554 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:14:43.886078 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:14:43.891737 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:14:43.899528 unknown[1086]: wrote ssh authorized keys file for user: core Aug 13 07:14:43.902958 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:14:43.930681 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 07:14:43.938063 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 07:14:43.938063 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 07:14:43.938063 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 07:14:43.973406 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 07:14:44.067475 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" 
Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:14:44.107316 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 07:14:44.645773 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 07:14:45.657506 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:14:45.657506 ignition[1086]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 13 07:14:45.687682 ignition[1086]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:14:45.693742 ignition[1086]: INFO : files: files passed Aug 13 07:14:45.693742 ignition[1086]: INFO : Ignition finished successfully Aug 13 07:14:45.690986 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:14:45.743446 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:14:45.752419 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:14:45.758530 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:14:45.758662 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:14:45.785677 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:14:45.785677 initrd-setup-root-after-ignition[1114]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:14:45.793720 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:14:45.799341 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:14:45.803131 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:14:45.819450 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:14:45.857612 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:14:45.857740 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:14:45.864532 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:14:45.873230 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:14:45.878622 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:14:45.890487 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:14:45.908028 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:14:45.920569 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:14:45.935266 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:14:45.941247 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:14:45.941460 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:14:45.942008 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:14:45.942137 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:14:45.942831 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:14:45.943366 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:14:45.943671 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:14:45.944100 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:14:45.944531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:14:45.944947 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:14:45.945436 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:14:45.945853 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:14:45.946273 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:14:45.946674 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:14:45.947064 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:14:45.947231 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:14:45.948133 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:14:45.948580 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:14:45.948940 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:14:45.983140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:14:45.989221 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:14:45.989406 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:14:46.041685 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:14:46.041869 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:14:46.051282 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:14:46.051456 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:14:46.058908 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 07:14:46.059110 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:14:46.079614 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:14:46.083242 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:14:46.083479 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:14:46.091396 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:14:46.098458 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:14:46.098707 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:14:46.108253 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:14:46.112743 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:14:46.120859 ignition[1138]: INFO : Ignition 2.19.0
Aug 13 07:14:46.120859 ignition[1138]: INFO : Stage: umount
Aug 13 07:14:46.120859 ignition[1138]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:14:46.120859 ignition[1138]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 07:14:46.132911 ignition[1138]: INFO : umount: umount passed
Aug 13 07:14:46.132911 ignition[1138]: INFO : Ignition finished successfully
Aug 13 07:14:46.123204 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:14:46.123513 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:14:46.135159 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:14:46.135272 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:14:46.142102 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:14:46.142447 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:14:46.145177 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:14:46.145239 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:14:46.149970 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:14:46.150067 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:14:46.171805 systemd[1]: Stopped target network.target - Network.
Aug 13 07:14:46.176275 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:14:46.176375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:14:46.182137 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:14:46.187449 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:14:46.192955 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:14:46.199630 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:14:46.201966 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:14:46.204545 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:14:46.204598 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:14:46.211358 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:14:46.213471 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:14:46.222261 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:14:46.222351 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:14:46.227221 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:14:46.227289 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:14:46.232371 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:14:46.237862 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:14:46.244486 systemd-networkd[889]: eth0: DHCPv6 lease lost
Aug 13 07:14:46.251431 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:14:46.252220 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:14:46.252350 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:14:46.262306 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:14:46.264561 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:14:46.273552 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:14:46.273613 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:14:46.289345 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:14:46.291869 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:14:46.291947 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:14:46.297341 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:14:46.297406 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:14:46.300459 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:14:46.300517 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:14:46.303310 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:14:46.303365 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:14:46.309002 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:14:46.344478 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:14:46.344645 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:14:46.350894 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:14:46.351009 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:14:46.356418 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:14:46.356463 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:14:46.361770 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:14:46.361832 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:14:46.367211 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:14:46.367265 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:14:46.386176 kernel: hv_netvsc 7c1e522d-accf-7c1e-522d-accf7c1e522d eth0: Data path switched from VF: enP21438s1
Aug 13 07:14:46.387614 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:14:46.387703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:14:46.403378 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:14:46.409319 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:14:46.409413 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:14:46.415251 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:14:46.418664 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:14:46.426855 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:14:46.426999 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:14:46.431949 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:14:46.432039 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:14:46.792258 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:14:46.792397 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:14:46.795882 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:14:46.800397 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:14:46.800480 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:14:46.815487 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:14:47.301242 systemd[1]: Switching root.
Aug 13 07:14:47.333115 systemd-journald[176]: Journal stopped
Aug 13 07:14:35.091925 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:14:35.091955 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:14:35.091964 kernel: BIOS-provided physical RAM map:
Aug 13 07:14:35.091973 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:14:35.091979 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Aug 13 07:14:35.091986 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Aug 13 07:14:35.091996 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Aug 13 07:14:35.092005 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Aug 13 07:14:35.092012 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Aug 13 07:14:35.092020 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Aug 13 07:14:35.092027 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Aug 13 07:14:35.092033 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Aug 13 07:14:35.092042 kernel: printk: bootconsole [earlyser0] enabled
Aug 13 07:14:35.092048 kernel: NX (Execute Disable) protection: active
Aug 13 07:14:35.092061 kernel: APIC: Static calls initialized
Aug 13 07:14:35.092068 kernel: efi: EFI v2.7 by Microsoft
Aug 13 07:14:35.092076 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Aug 13 07:14:35.092086 kernel: SMBIOS 3.1.0 present.
Aug 13 07:14:35.092093 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Aug 13 07:14:35.092101 kernel: Hypervisor detected: Microsoft Hyper-V
Aug 13 07:14:35.092110 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Aug 13 07:14:35.092117 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Aug 13 07:14:35.092126 kernel: Hyper-V: Nested features: 0x1e0101
Aug 13 07:14:35.092134 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Aug 13 07:14:35.092143 kernel: Hyper-V: Using hypercall for remote TLB flush
Aug 13 07:14:35.092160 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 07:14:35.092170 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 07:14:35.092178 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Aug 13 07:14:35.092188 kernel: tsc: Detected 2593.908 MHz processor
Aug 13 07:14:35.092198 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:14:35.092206 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:14:35.092215 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Aug 13 07:14:35.092225 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:14:35.092235 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:14:35.092244 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Aug 13 07:14:35.092252 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Aug 13 07:14:35.092259 kernel: Using GB pages for direct mapping
Aug 13 07:14:35.092268 kernel: Secure boot disabled
Aug 13 07:14:35.092276 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:14:35.092294 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Aug 13 07:14:35.092306 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:14:35.092318 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:14:35.092326 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Aug 13 07:14:35.092336 kernel: ACPI: FACS 0x000000003FFFE000 000040
Aug 13 07:14:35.092344 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:14:35.092352 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:14:35.092362 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:14:35.092372 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:14:35.092383 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:14:35.092390 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:14:35.092400 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:14:35.092409 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Aug 13 07:14:35.092416 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Aug 13 07:14:35.092427 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Aug 13 07:14:35.092434 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Aug 13 07:14:35.092447 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Aug 13 07:14:35.092454 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Aug 13 07:14:35.092462 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Aug 13 07:14:35.092472 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Aug 13 07:14:35.092480 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Aug 13 07:14:35.092487 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Aug 13 07:14:35.092495 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:14:35.092503 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:14:35.092510 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Aug 13 07:14:35.092520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Aug 13 07:14:35.092527 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Aug 13 07:14:35.092535 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Aug 13 07:14:35.092542 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Aug 13 07:14:35.092550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Aug 13 07:14:35.092558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Aug 13 07:14:35.092565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Aug 13 07:14:35.092573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Aug 13 07:14:35.092580 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Aug 13 07:14:35.092590 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Aug 13 07:14:35.092597 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Aug 13 07:14:35.092605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Aug 13 07:14:35.092612 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Aug 13 07:14:35.092621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Aug 13 07:14:35.092630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Aug 13 07:14:35.092638 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Aug 13 07:14:35.092648 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Aug 13 07:14:35.092656 kernel: Zone ranges:
Aug 13 07:14:35.092667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:14:35.092676 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 07:14:35.092684 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 07:14:35.092700 kernel: Movable zone start for each node
Aug 13 07:14:35.092709 kernel: Early memory node ranges
Aug 13 07:14:35.092718 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:14:35.092729 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Aug 13 07:14:35.092738 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Aug 13 07:14:35.092747 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 07:14:35.092758 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Aug 13 07:14:35.092769 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:14:35.092776 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:14:35.092786 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Aug 13 07:14:35.092794 kernel: ACPI: PM-Timer IO Port: 0x408
Aug 13 07:14:35.092802 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Aug 13 07:14:35.092813 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:14:35.092820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:14:35.092830 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:14:35.092841 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Aug 13 07:14:35.092850 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:14:35.092859 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Aug 13 07:14:35.092866 kernel: Booting paravirtualized kernel on Hyper-V
Aug 13 07:14:35.092877 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:14:35.092885 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:14:35.092894 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:14:35.092903 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:14:35.092910 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:14:35.092923 kernel: Hyper-V: PV spinlocks enabled
Aug 13 07:14:35.092930 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:14:35.092941 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:14:35.092950 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:14:35.092958 kernel: random: crng init done
Aug 13 07:14:35.092968 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 07:14:35.092975 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:14:35.092985 kernel: Fallback order for Node 0: 0
Aug 13 07:14:35.092996 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Aug 13 07:14:35.093014 kernel: Policy zone: Normal
Aug 13 07:14:35.093025 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:14:35.093035 kernel: software IO TLB: area num 2.
Aug 13 07:14:35.093043 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 310128K reserved, 0K cma-reserved)
Aug 13 07:14:35.093054 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:14:35.093063 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:14:35.093073 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:14:35.093081 kernel: Dynamic Preempt: voluntary
Aug 13 07:14:35.093090 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:14:35.093107 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:14:35.093120 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:14:35.093131 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:14:35.093142 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:14:35.093152 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:14:35.093162 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:14:35.093174 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:14:35.093182 kernel: Using NULL legacy PIC
Aug 13 07:14:35.093194 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Aug 13 07:14:35.093202 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:14:35.093213 kernel: Console: colour dummy device 80x25
Aug 13 07:14:35.093221 kernel: printk: console [tty1] enabled
Aug 13 07:14:35.093230 kernel: printk: console [ttyS0] enabled
Aug 13 07:14:35.093240 kernel: printk: bootconsole [earlyser0] disabled
Aug 13 07:14:35.093248 kernel: ACPI: Core revision 20230628
Aug 13 07:14:35.093259 kernel: Failed to register legacy timer interrupt
Aug 13 07:14:35.093269 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:14:35.093286 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Aug 13 07:14:35.093295 kernel: Hyper-V: Using IPI hypercalls
Aug 13 07:14:35.093306 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Aug 13 07:14:35.093314 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Aug 13 07:14:35.093325 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Aug 13 07:14:35.093333 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Aug 13 07:14:35.093344 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Aug 13 07:14:35.093353 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Aug 13 07:14:35.093365 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908)
Aug 13 07:14:35.093374 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 07:14:35.093383 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 07:14:35.093393 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:14:35.093401 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:14:35.093412 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:14:35.093420 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 13 07:14:35.093430 kernel: RETBleed: Vulnerable
Aug 13 07:14:35.093439 kernel: Speculative Store Bypass: Vulnerable
Aug 13 07:14:35.093450 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:14:35.093460 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:14:35.093468 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:14:35.093479 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:14:35.093487 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:14:35.093497 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:14:35.093512 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 07:14:35.093522 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 07:14:35.093533 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 07:14:35.093543 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:14:35.093552 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Aug 13 07:14:35.093565 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Aug 13 07:14:35.093575 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Aug 13 07:14:35.093583 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Aug 13 07:14:35.093594 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:14:35.093602 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:14:35.093611 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:14:35.093621 kernel: landlock: Up and running.
Aug 13 07:14:35.093629 kernel: SELinux: Initializing.
Aug 13 07:14:35.093640 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:14:35.093648 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:14:35.093658 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 13 07:14:35.093667 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:14:35.093680 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:14:35.093689 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:14:35.093697 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 13 07:14:35.093706 kernel: signal: max sigframe size: 3632
Aug 13 07:14:35.093717 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:14:35.093725 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:14:35.093736 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:14:35.093744 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:14:35.093754 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:14:35.093765 kernel: .... node #0, CPUs: #1
Aug 13 07:14:35.093774 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Aug 13 07:14:35.093785 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 07:14:35.093793 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:14:35.097685 kernel: smpboot: Max logical packages: 1
Aug 13 07:14:35.097710 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Aug 13 07:14:35.097727 kernel: devtmpfs: initialized
Aug 13 07:14:35.097742 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:14:35.097764 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Aug 13 07:14:35.097779 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:14:35.097794 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:14:35.097809 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:14:35.097824 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:14:35.097839 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:14:35.097854 kernel: audit: type=2000 audit(1755069273.028:1): state=initialized audit_enabled=0 res=1
Aug 13 07:14:35.097868 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:14:35.097882 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:14:35.097900 kernel: cpuidle: using governor menu
Aug 13 07:14:35.097914 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:14:35.097929 kernel: dca service started, version 1.12.1
Aug 13 07:14:35.097944 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Aug 13 07:14:35.097958 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:14:35.097973 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:14:35.097988 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:14:35.098002 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:14:35.098020 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:14:35.098034 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:14:35.098049 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:14:35.098064 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:14:35.098079 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:14:35.098093 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:14:35.098108 kernel: ACPI: Interpreter enabled Aug 13 07:14:35.098123 kernel: ACPI: PM: (supports S0 S5) Aug 13 07:14:35.098137 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:14:35.098152 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:14:35.098169 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 13 07:14:35.098184 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Aug 13 07:14:35.098198 kernel: iommu: Default domain type: Translated Aug 13 07:14:35.098213 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:14:35.098228 kernel: efivars: Registered efivars operations Aug 13 07:14:35.098242 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:14:35.098257 kernel: PCI: System does not support PCI Aug 13 07:14:35.098271 kernel: vgaarb: loaded Aug 13 07:14:35.098295 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Aug 13 07:14:35.098313 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:14:35.098328 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:14:35.098343 kernel: pnp: PnP ACPI init Aug 13 07:14:35.098357 kernel: pnp: PnP ACPI: found 3 devices
Aug 13 07:14:35.098372 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:14:35.098387 kernel: NET: Registered PF_INET protocol family Aug 13 07:14:35.098402 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:14:35.098417 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 13 07:14:35.098432 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:14:35.098449 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 07:14:35.098464 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 13 07:14:35.098479 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 13 07:14:35.098493 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 07:14:35.098508 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 07:14:35.098523 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:14:35.098538 kernel: NET: Registered PF_XDP protocol family Aug 13 07:14:35.098553 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:14:35.098568 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 07:14:35.098585 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Aug 13 07:14:35.098600 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:14:35.098615 kernel: Initialise system trusted keyrings Aug 13 07:14:35.098629 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 13 07:14:35.098643 kernel: Key type asymmetric registered Aug 13 07:14:35.098658 kernel: Asymmetric key parser 'x509' registered Aug 13 07:14:35.098672 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:14:35.098687 kernel: io scheduler mq-deadline registered Aug 13 07:14:35.098701 kernel: io scheduler kyber registered
Aug 13 07:14:35.098718 kernel: io scheduler bfq registered Aug 13 07:14:35.098733 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:14:35.098748 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:14:35.098762 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:14:35.098777 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 07:14:35.098792 kernel: i8042: PNP: No PS/2 controller found. Aug 13 07:14:35.098988 kernel: rtc_cmos 00:02: registered as rtc0 Aug 13 07:14:35.099114 kernel: rtc_cmos 00:02: setting system clock to 2025-08-13T07:14:34 UTC (1755069274) Aug 13 07:14:35.099230 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Aug 13 07:14:35.099247 kernel: intel_pstate: CPU model not supported Aug 13 07:14:35.099261 kernel: efifb: probing for efifb Aug 13 07:14:35.099320 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 13 07:14:35.099348 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 13 07:14:35.099366 kernel: efifb: scrolling: redraw Aug 13 07:14:35.099379 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 13 07:14:35.099392 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 07:14:35.099410 kernel: fb0: EFI VGA frame buffer device Aug 13 07:14:35.099425 kernel: pstore: Using crash dump compression: deflate Aug 13 07:14:35.099439 kernel: pstore: Registered efi_pstore as persistent store backend Aug 13 07:14:35.099450 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:14:35.099464 kernel: Segment Routing with IPv6 Aug 13 07:14:35.099478 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:14:35.099493 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:14:35.099507 kernel: Key type dns_resolver registered Aug 13 07:14:35.099519 kernel: IPI shorthand broadcast: enabled Aug 13 07:14:35.099535 kernel: sched_clock: Marking stable (869002800, 45003500)->(1113094200, -199087900)
Aug 13 07:14:35.099549 kernel: registered taskstats version 1 Aug 13 07:14:35.099561 kernel: Loading compiled-in X.509 certificates Aug 13 07:14:35.099579 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:14:35.099598 kernel: Key type .fscrypt registered Aug 13 07:14:35.099609 kernel: Key type fscrypt-provisioning registered Aug 13 07:14:35.099621 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:14:35.099633 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:14:35.099646 kernel: ima: No architecture policies found Aug 13 07:14:35.099664 kernel: clk: Disabling unused clocks Aug 13 07:14:35.099676 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:14:35.099688 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:14:35.099701 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:14:35.099714 kernel: Run /init as init process Aug 13 07:14:35.099728 kernel: with arguments: Aug 13 07:14:35.099743 kernel: /init Aug 13 07:14:35.099755 kernel: with environment: Aug 13 07:14:35.099768 kernel: HOME=/ Aug 13 07:14:35.099786 kernel: TERM=linux Aug 13 07:14:35.099801 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:14:35.099819 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:14:35.099837 systemd[1]: Detected virtualization microsoft. Aug 13 07:14:35.099853 systemd[1]: Detected architecture x86-64. Aug 13 07:14:35.099868 systemd[1]: Running in initrd. Aug 13 07:14:35.099884 systemd[1]: No hostname configured, using default hostname. Aug 13 07:14:35.099899 systemd[1]: Hostname set to . 
Aug 13 07:14:35.099918 systemd[1]: Initializing machine ID from random generator. Aug 13 07:14:35.099933 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:14:35.099949 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:14:35.099965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:14:35.099982 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:14:35.099998 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:14:35.100014 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:14:35.100030 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:14:35.100051 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:14:35.100068 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:14:35.100084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:14:35.100100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:14:35.100116 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:14:35.100130 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:14:35.100147 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:14:35.100162 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:14:35.100178 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:14:35.100193 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:14:35.100209 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Aug 13 07:14:35.100225 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:14:35.100240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:14:35.100256 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:14:35.100271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:14:35.100322 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:14:35.100337 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:14:35.100351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:14:35.100366 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:14:35.100381 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:14:35.100396 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:14:35.100411 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:14:35.100426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:14:35.100468 systemd-journald[176]: Collecting audit messages is disabled. Aug 13 07:14:35.100505 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:14:35.100521 systemd-journald[176]: Journal started Aug 13 07:14:35.100555 systemd-journald[176]: Runtime Journal (/run/log/journal/261ca2fe571c4b25bc150f0e7b5664ef) is 8.0M, max 158.8M, 150.8M free. Aug 13 07:14:35.099618 systemd-modules-load[177]: Inserted module 'overlay' Aug 13 07:14:35.113578 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:14:35.124299 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:14:35.129562 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:14:35.132269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 07:14:35.154310 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:14:35.155588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:14:35.164147 kernel: Bridge firewalling registered Aug 13 07:14:35.164220 systemd-modules-load[177]: Inserted module 'br_netfilter' Aug 13 07:14:35.167488 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:14:35.177550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:14:35.178058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:14:35.189472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:14:35.189870 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:14:35.206517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:14:35.225922 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:14:35.236522 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:14:35.240461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:14:35.257441 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:14:35.261526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 07:14:35.275486 dracut-cmdline[208]: dracut-dracut-053 Aug 13 07:14:35.279805 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:14:35.275545 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:14:35.342022 systemd-resolved[215]: Positive Trust Anchors: Aug 13 07:14:35.344805 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:14:35.344863 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:14:35.369102 systemd-resolved[215]: Defaulting to hostname 'linux'. Aug 13 07:14:35.372870 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:14:35.383158 kernel: SCSI subsystem initialized Aug 13 07:14:35.375841 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:14:35.394308 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 07:14:35.406313 kernel: iscsi: registered transport (tcp) Aug 13 07:14:35.428599 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:14:35.428700 kernel: QLogic iSCSI HBA Driver Aug 13 07:14:35.465119 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:14:35.474433 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:14:35.503277 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:14:35.503403 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:14:35.506621 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:14:35.548316 kernel: raid6: avx512x4 gen() 18377 MB/s Aug 13 07:14:35.567297 kernel: raid6: avx512x2 gen() 18437 MB/s Aug 13 07:14:35.586312 kernel: raid6: avx512x1 gen() 18392 MB/s Aug 13 07:14:35.605303 kernel: raid6: avx2x4 gen() 18164 MB/s Aug 13 07:14:35.624298 kernel: raid6: avx2x2 gen() 18275 MB/s Aug 13 07:14:35.644358 kernel: raid6: avx2x1 gen() 13929 MB/s Aug 13 07:14:35.644416 kernel: raid6: using algorithm avx512x2 gen() 18437 MB/s Aug 13 07:14:35.666349 kernel: raid6: .... xor() 28946 MB/s, rmw enabled Aug 13 07:14:35.666404 kernel: raid6: using avx512x2 recovery algorithm Aug 13 07:14:35.689316 kernel: xor: automatically using best checksumming function avx Aug 13 07:14:35.836310 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:14:35.846379 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:14:35.855469 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:14:35.868525 systemd-udevd[395]: Using default interface naming scheme 'v255'. Aug 13 07:14:35.877252 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:14:35.888466 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 13 07:14:35.902687 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Aug 13 07:14:35.931197 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:14:35.942625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:14:35.986429 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:14:35.999564 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:14:36.033089 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:14:36.034089 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:14:36.043597 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:14:36.046959 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:14:36.066412 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:14:36.078306 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:14:36.100148 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:14:36.119611 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:14:36.119675 kernel: AES CTR mode by8 optimization enabled Aug 13 07:14:36.122712 kernel: hv_vmbus: Vmbus version:5.2 Aug 13 07:14:36.123647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:14:36.126769 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:14:36.133589 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:14:36.136268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:14:36.136575 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:14:36.149570 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 07:14:36.166728 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:14:36.183984 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 07:14:36.184011 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 07:14:36.173202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:14:36.195819 kernel: PTP clock support registered Aug 13 07:14:36.173657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:14:36.200814 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:14:36.211302 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 07:14:36.215330 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 07:14:36.225916 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 13 07:14:36.232306 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 07:14:36.232392 kernel: hv_vmbus: registering driver hv_utils Aug 13 07:14:36.237092 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 07:14:36.237167 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 07:14:36.239304 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 07:14:36.706066 systemd-resolved[215]: Clock change detected. Flushing caches. Aug 13 07:14:36.715140 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 07:14:36.715271 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 13 07:14:36.723541 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 07:14:36.724140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 07:14:36.739777 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 07:14:36.739833 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 07:14:36.739846 kernel: scsi host0: storvsc_host_t Aug 13 07:14:36.741607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:14:36.756378 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 07:14:36.756488 kernel: scsi host1: storvsc_host_t Aug 13 07:14:36.756944 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 07:14:36.774215 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:14:36.784750 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 07:14:36.785118 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 07:14:36.790180 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 07:14:36.803752 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 07:14:36.804046 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 07:14:36.810093 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 07:14:36.810389 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 07:14:36.810570 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 07:14:36.818163 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:36.821171 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 07:14:36.930965 kernel: hv_netvsc 7c1e522d-accf-7c1e-522d-accf7c1e522d eth0: VF slot 1 added Aug 13 07:14:36.938164 kernel: hv_vmbus: registering driver hv_pci Aug 13 07:14:36.942162 kernel: hv_pci 1c2d52af-53be-4214-b728-e2a5cc9b6bf0: PCI VMBus probing: Using version 0x10004 Aug 13 07:14:36.947167 kernel: hv_pci 1c2d52af-53be-4214-b728-e2a5cc9b6bf0: PCI host bridge to bus 53be:00 Aug 13 07:14:36.947359 kernel: pci_bus 53be:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Aug 13 07:14:36.950166 kernel: pci_bus 53be:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 07:14:36.956343 kernel: pci 53be:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 13 07:14:36.961191 kernel: pci 53be:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 07:14:36.964485 kernel: pci 53be:00:02.0: enabling Extended Tags Aug 13 07:14:36.976371 kernel: pci 53be:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 53be:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 13 07:14:36.985009 kernel: pci_bus 53be:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 07:14:36.985473 kernel: pci 53be:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 07:14:37.152549 kernel: mlx5_core 53be:00:02.0: enabling device (0000 -> 0002) Aug 13 07:14:37.156164 kernel: mlx5_core 53be:00:02.0: firmware version: 14.30.5000 Aug 13 07:14:37.397943 kernel: hv_netvsc 7c1e522d-accf-7c1e-522d-accf7c1e522d eth0: VF registering: eth1 Aug 13 07:14:37.398438 kernel: mlx5_core 53be:00:02.0 eth1: joined to eth0 Aug 13 07:14:37.405339 kernel: mlx5_core 53be:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 13 07:14:37.414175 kernel: mlx5_core 53be:00:02.0 enP21438s1: renamed from eth1 Aug 13 07:14:37.427923 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 13 07:14:37.530409 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (444) Aug 13 07:14:37.549559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 13 07:14:37.565265 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 13 07:14:37.580181 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (441) Aug 13 07:14:37.595501 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Aug 13 07:14:37.598914 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 13 07:14:37.617426 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:14:37.632242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:37.642188 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:37.650180 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:38.650212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:14:38.650807 disk-uuid[602]: The operation has completed successfully. Aug 13 07:14:38.741223 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:14:38.741338 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:14:38.764331 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:14:38.770221 sh[715]: Success Aug 13 07:14:38.801170 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:14:39.175319 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:14:39.192293 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:14:39.197556 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:14:39.230343 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:14:39.230427 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:14:39.234016 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:14:39.236980 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:14:39.239498 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:14:39.654201 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Aug 13 07:14:39.655083 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:14:39.666434 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:14:39.672000 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:14:39.689174 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:14:39.689238 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:14:39.694010 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:14:39.767129 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:14:39.778326 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:14:39.779507 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:14:39.793951 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:14:39.800171 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:14:39.808313 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:14:39.808453 systemd-networkd[889]: lo: Link UP Aug 13 07:14:39.808456 systemd-networkd[889]: lo: Gained carrier Aug 13 07:14:39.816453 systemd-networkd[889]: Enumeration completed Aug 13 07:14:39.817320 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:14:39.817496 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:14:39.817500 systemd-networkd[889]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:14:39.820569 systemd[1]: Reached target network.target - Network. 
Aug 13 07:14:39.838435 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:14:39.885181 kernel: mlx5_core 53be:00:02.0 enP21438s1: Link up Aug 13 07:14:39.916208 kernel: hv_netvsc 7c1e522d-accf-7c1e-522d-accf7c1e522d eth0: Data path switched to VF: enP21438s1 Aug 13 07:14:39.916982 systemd-networkd[889]: enP21438s1: Link UP Aug 13 07:14:39.919660 systemd-networkd[889]: eth0: Link UP Aug 13 07:14:39.921658 systemd-networkd[889]: eth0: Gained carrier Aug 13 07:14:39.921676 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:14:39.934384 systemd-networkd[889]: enP21438s1: Gained carrier Aug 13 07:14:39.951194 systemd-networkd[889]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 07:14:40.920296 ignition[900]: Ignition 2.19.0 Aug 13 07:14:40.920308 ignition[900]: Stage: fetch-offline Aug 13 07:14:40.920351 ignition[900]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:40.920362 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:40.920475 ignition[900]: parsed url from cmdline: "" Aug 13 07:14:40.920480 ignition[900]: no config URL provided Aug 13 07:14:40.920487 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:14:40.920497 ignition[900]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:14:40.920504 ignition[900]: failed to fetch config: resource requires networking Aug 13 07:14:40.922725 ignition[900]: Ignition finished successfully Aug 13 07:14:40.941280 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:14:40.950392 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 07:14:40.964903 ignition[908]: Ignition 2.19.0 Aug 13 07:14:40.964916 ignition[908]: Stage: fetch Aug 13 07:14:40.965185 ignition[908]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:40.965201 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:40.965333 ignition[908]: parsed url from cmdline: "" Aug 13 07:14:40.965336 ignition[908]: no config URL provided Aug 13 07:14:40.965342 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:14:40.965350 ignition[908]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:14:40.965379 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 07:14:41.068378 ignition[908]: GET result: OK Aug 13 07:14:41.068880 ignition[908]: config has been read from IMDS userdata Aug 13 07:14:41.068986 ignition[908]: parsing config with SHA512: bc5ea631a6497609ef516ef0888c82d16cfad68a2e3172432f4e4174ab94e271b7cd8a9b95ecc23d5f43b270debb5ad228c4bf5e0a7603771abf9549f62f1064 Aug 13 07:14:41.078123 unknown[908]: fetched base config from "system" Aug 13 07:14:41.078139 unknown[908]: fetched base config from "system" Aug 13 07:14:41.079160 ignition[908]: fetch: fetch complete Aug 13 07:14:41.078159 unknown[908]: fetched user config from "azure" Aug 13 07:14:41.079175 ignition[908]: fetch: fetch passed Aug 13 07:14:41.080917 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:14:41.079237 ignition[908]: Ignition finished successfully Aug 13 07:14:41.091365 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 13 07:14:41.108942 ignition[914]: Ignition 2.19.0 Aug 13 07:14:41.108953 ignition[914]: Stage: kargs Aug 13 07:14:41.109212 ignition[914]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:41.109227 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:41.110136 ignition[914]: kargs: kargs passed Aug 13 07:14:41.110210 ignition[914]: Ignition finished successfully Aug 13 07:14:41.119586 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:14:41.128439 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:14:41.146955 ignition[920]: Ignition 2.19.0 Aug 13 07:14:41.146967 ignition[920]: Stage: disks Aug 13 07:14:41.149306 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:14:41.147236 ignition[920]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:41.153806 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:14:41.147250 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:14:41.159019 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:14:41.148205 ignition[920]: disks: disks passed Aug 13 07:14:41.162140 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:14:41.148255 ignition[920]: Ignition finished successfully Aug 13 07:14:41.184026 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:14:41.188249 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:14:41.204457 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:14:41.274493 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 13 07:14:41.282048 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:14:41.295294 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 13 07:14:41.398167 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:14:41.398689 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:14:41.401719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:14:41.452306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:14:41.466173 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Aug 13 07:14:41.472814 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:14:41.472903 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:14:41.476180 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:14:41.476323 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:14:41.482995 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 07:14:41.488586 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:14:41.491671 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:14:41.491722 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:14:41.505257 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:14:41.507590 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:14:41.518317 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:14:41.837375 systemd-networkd[889]: eth0: Gained IPv6LL
Aug 13 07:14:42.231658 coreos-metadata[955]: Aug 13 07:14:42.231 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Aug 13 07:14:42.235849 coreos-metadata[955]: Aug 13 07:14:42.235 INFO Fetch successful
Aug 13 07:14:42.238219 coreos-metadata[955]: Aug 13 07:14:42.235 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Aug 13 07:14:42.253514 coreos-metadata[955]: Aug 13 07:14:42.253 INFO Fetch successful
Aug 13 07:14:42.273705 coreos-metadata[955]: Aug 13 07:14:42.273 INFO wrote hostname ci-4081.3.5-a-0c3b310332 to /sysroot/etc/hostname
Aug 13 07:14:42.280882 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:14:42.468433 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:14:42.521058 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:14:42.540982 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:14:42.575384 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:14:43.563844 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:14:43.574385 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:14:43.580391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:14:43.591792 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:14:43.598379 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:14:43.627631 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
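The coreos-metadata entries above fetch the VM name from Azure IMDS and record it as the hostname. A minimal sketch of that flow, with assumptions: the endpoint and `Metadata: true` header follow Azure's IMDS convention and only resolve on an Azure VM, and `DEST` stands in for `/sysroot/etc/hostname` so the sketch is safe to run anywhere:

```shell
# Sketch of the flatcar-metadata-hostname step seen in the log above.
DEST=/tmp/hostname.demo
fetch_vm_name() {
  # Azure IMDS requires the "Metadata: true" header; reachable only on Azure.
  curl -sf --max-time 2 -H 'Metadata: true' \
    'http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text'
}
# On a real Azure VM you would use: name="$(fetch_vm_name)"
name="ci-4081.3.5-a-0c3b310332"   # value taken from the log entry above
printf '%s\n' "$name" > "$DEST"
```

In the real boot, the agent writes into `/sysroot` before the pivot, so the hostname is already in place when the real root takes over.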
Aug 13 07:14:43.634350 ignition[1058]: INFO : Ignition 2.19.0
Aug 13 07:14:43.634350 ignition[1058]: INFO : Stage: mount
Aug 13 07:14:43.634350 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:14:43.634350 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 07:14:43.634350 ignition[1058]: INFO : mount: mount passed
Aug 13 07:14:43.634350 ignition[1058]: INFO : Ignition finished successfully
Aug 13 07:14:43.649080 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:14:43.658247 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:14:43.665300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:14:43.686184 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1069)
Aug 13 07:14:43.693047 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:14:43.693133 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:14:43.695607 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:14:43.702167 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:14:43.704160 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:14:43.729046 ignition[1086]: INFO : Ignition 2.19.0
Aug 13 07:14:43.729046 ignition[1086]: INFO : Stage: files
Aug 13 07:14:43.734322 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:14:43.734322 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 07:14:43.734322 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:14:43.775554 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:14:43.775554 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:14:43.886078 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:14:43.891737 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:14:43.899528 unknown[1086]: wrote ssh authorized keys file for user: core
Aug 13 07:14:43.902958 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:14:43.930681 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:14:43.938063 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:14:43.938063 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:14:43.938063 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:14:43.973406 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 07:14:44.067475 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:14:44.072821 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:14:44.107316 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:14:44.111715 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:14:44.645773 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 07:14:45.657506 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:14:45.657506 ignition[1086]: INFO : files: op(c): [started] processing unit "containerd.service"
Aug 13 07:14:45.687682 ignition[1086]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(c): [finished] processing unit "containerd.service"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:14:45.693742 ignition[1086]: INFO : files: files passed
Aug 13 07:14:45.693742 ignition[1086]: INFO : Ignition finished successfully
Aug 13 07:14:45.690986 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:14:45.743446 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:14:45.752419 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:14:45.758530 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:14:45.758662 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:14:45.785677 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:14:45.785677 initrd-setup-root-after-ignition[1114]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:14:45.793720 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:14:45.799341 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:14:45.803131 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:14:45.819450 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:14:45.857612 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:14:45.857740 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
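The `setting preset to enabled for "prepare-helm.service"` step above boils down to what `systemctl enable` would do against the mounted `/sysroot`: a symlink in the wanting target's `.wants` directory. A minimal sketch, assuming the unit is wanted by `multi-user.target` (the log does not state which target) and using a scratch `ROOT` in place of `/sysroot`:

```shell
# Illustrative only: enabling a unit offline is a .wants symlink under the
# configured root. ROOT stands in for /sysroot; the unit file is a placeholder.
ROOT=/tmp/ignition-demo
mkdir -p "$ROOT/etc/systemd/system/multi-user.target.wants"
: > "$ROOT/etc/systemd/system/prepare-helm.service"   # placeholder unit file
ln -sf ../prepare-helm.service \
  "$ROOT/etc/systemd/system/multi-user.target.wants/prepare-helm.service"
```

On the real system this is done via systemd's preset machinery rather than a hand-rolled `ln`, so [Install] sections and preset policy files are honored.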
Aug 13 07:14:45.864532 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:14:45.873230 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:14:45.878622 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:14:45.890487 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:14:45.908028 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:14:45.920569 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:14:45.935266 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:14:45.941247 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:14:45.941460 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:14:45.942008 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:14:45.942137 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:14:45.942831 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:14:45.943366 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:14:45.943671 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:14:45.944100 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:14:45.944531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:14:45.944947 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:14:45.945436 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:14:45.945853 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:14:45.946273 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:14:45.946674 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:14:45.947064 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:14:45.947231 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:14:45.948133 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:14:45.948580 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:14:45.948940 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:14:45.983140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:14:45.989221 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:14:45.989406 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:14:46.041685 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:14:46.041869 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:14:46.051282 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:14:46.051456 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:14:46.058908 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 07:14:46.059110 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:14:46.079614 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:14:46.083242 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:14:46.083479 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:14:46.091396 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:14:46.098458 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:14:46.098707 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:14:46.108253 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:14:46.112743 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:14:46.120859 ignition[1138]: INFO : Ignition 2.19.0
Aug 13 07:14:46.120859 ignition[1138]: INFO : Stage: umount
Aug 13 07:14:46.120859 ignition[1138]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:14:46.120859 ignition[1138]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 07:14:46.132911 ignition[1138]: INFO : umount: umount passed
Aug 13 07:14:46.132911 ignition[1138]: INFO : Ignition finished successfully
Aug 13 07:14:46.123204 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:14:46.123513 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:14:46.135159 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:14:46.135272 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:14:46.142102 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:14:46.142447 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:14:46.145177 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:14:46.145239 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:14:46.149970 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:14:46.150067 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:14:46.171805 systemd[1]: Stopped target network.target - Network.
Aug 13 07:14:46.176275 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:14:46.176375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:14:46.182137 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:14:46.187449 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:14:46.192955 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:14:46.199630 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:14:46.201966 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:14:46.204545 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:14:46.204598 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:14:46.211358 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:14:46.213471 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:14:46.222261 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:14:46.222351 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:14:46.227221 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:14:46.227289 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:14:46.232371 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:14:46.237862 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:14:46.244486 systemd-networkd[889]: eth0: DHCPv6 lease lost
Aug 13 07:14:46.251431 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:14:46.252220 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:14:46.252350 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:14:46.262306 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:14:46.264561 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:14:46.273552 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:14:46.273613 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:14:46.289345 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:14:46.291869 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:14:46.291947 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:14:46.297341 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:14:46.297406 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:14:46.300459 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:14:46.300517 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:14:46.303310 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:14:46.303365 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:14:46.309002 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:14:46.344478 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:14:46.344645 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:14:46.350894 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:14:46.351009 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:14:46.356418 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:14:46.356463 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:14:46.361770 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:14:46.361832 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:14:46.367211 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:14:46.367265 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:14:46.386176 kernel: hv_netvsc 7c1e522d-accf-7c1e-522d-accf7c1e522d eth0: Data path switched from VF: enP21438s1
Aug 13 07:14:46.387614 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:14:46.387703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:14:46.403378 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:14:46.409319 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:14:46.409413 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:14:46.415251 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:14:46.418664 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:14:46.426855 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:14:46.426999 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:14:46.431949 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:14:46.432039 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:14:46.792258 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:14:46.792397 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:14:46.795882 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:14:46.800397 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:14:46.800480 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:14:46.815487 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:14:47.301242 systemd[1]: Switching root.
Aug 13 07:14:47.333115 systemd-journald[176]: Journal stopped
Aug 13 07:14:57.678737 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:14:57.678786 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:14:57.678803 kernel: SELinux: policy capability open_perms=1
Aug 13 07:14:57.678816 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:14:57.678828 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:14:57.678840 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:14:57.678855 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:14:57.678874 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:14:57.678888 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:14:57.678904 kernel: audit: type=1403 audit(1755069290.096:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:14:57.678921 systemd[1]: Successfully loaded SELinux policy in 240.483ms.
Aug 13 07:14:57.678939 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.612ms.
Aug 13 07:14:57.678958 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:14:57.678975 systemd[1]: Detected virtualization microsoft.
Aug 13 07:14:57.678998 systemd[1]: Detected architecture x86-64.
Aug 13 07:14:57.679015 systemd[1]: Detected first boot.
Aug 13 07:14:57.679033 systemd[1]: Hostname set to .
Aug 13 07:14:57.679051 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:14:57.679068 zram_generator::config[1197]: No configuration found.
Aug 13 07:14:57.679090 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:14:57.679108 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:14:57.679127 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 07:14:57.679170 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:14:57.679186 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:14:57.679196 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:14:57.679206 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:14:57.679219 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:14:57.679229 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:14:57.679241 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:14:57.679251 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:14:57.679261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:14:57.679271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:14:57.679280 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:14:57.679292 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:14:57.679302 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:14:57.679316 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:14:57.679330 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:14:57.679347 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:14:57.679363 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:14:57.679381 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:14:57.679404 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:14:57.679424 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:14:57.679443 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:14:57.679461 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:14:57.679477 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:14:57.679492 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:14:57.679508 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:14:57.679525 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:14:57.679542 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:14:57.679563 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:14:57.679578 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:14:57.679594 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:14:57.679610 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:14:57.679626 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:14:57.679645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:14:57.679662 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:14:57.679679 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:14:57.679695 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:14:57.679712 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:14:57.679730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:14:57.679746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:14:57.679762 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:14:57.679783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:14:57.679801 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:14:57.679819 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:14:57.679837 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:14:57.679854 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:14:57.679873 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:14:57.679890 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 13 07:14:57.679907 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Aug 13 07:14:57.679927 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:14:57.679943 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:14:57.679959 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:14:57.679974 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:14:57.679990 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:14:57.680006 kernel: fuse: init (API version 7.39)
Aug 13 07:14:57.680022 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:14:57.680074 systemd-journald[1317]: Collecting audit messages is disabled.
Aug 13 07:14:57.680111 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:14:57.680128 systemd-journald[1317]: Journal started
Aug 13 07:14:57.680272 systemd-journald[1317]: Runtime Journal (/run/log/journal/218b19b3800f4f2b85c04317a08c0478) is 8.0M, max 158.8M, 150.8M free.
Aug 13 07:14:57.687890 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:14:57.699920 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:14:57.708874 kernel: loop: module loaded
Aug 13 07:14:57.706330 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:14:57.709545 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:14:57.712783 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:14:57.716309 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:14:57.719191 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:14:57.722807 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:14:57.726376 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:14:57.726616 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:14:57.729991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:14:57.730257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:14:57.733744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:14:57.733996 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:14:57.737576 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:14:57.737829 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:14:57.741223 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:14:57.741469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:14:57.744857 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:14:57.749782 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:14:57.759042 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:14:57.779001 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:14:57.789324 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:14:57.800302 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:14:57.804506 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:14:57.854163 kernel: ACPI: bus type drm_connector registered
Aug 13 07:14:57.855342 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:14:57.862961 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:14:57.869288 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:14:57.874329 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:14:57.877732 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:14:57.879022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:14:57.885386 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:14:57.893399 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:14:57.896404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:14:57.899918 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:14:57.903482 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:14:57.906873 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:14:57.919342 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:14:57.935840 udevadm[1363]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 07:14:57.940311 systemd-journald[1317]: Time spent on flushing to /var/log/journal/218b19b3800f4f2b85c04317a08c0478 is 18.130ms for 948 entries.
Aug 13 07:14:57.940311 systemd-journald[1317]: System Journal (/var/log/journal/218b19b3800f4f2b85c04317a08c0478) is 8.0M, max 2.6G, 2.6G free.
Aug 13 07:14:57.992849 systemd-journald[1317]: Received client request to flush runtime journal.
Aug 13 07:14:57.955521 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:14:57.961705 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:14:57.995011 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:14:58.072703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:14:58.194233 systemd-tmpfiles[1355]: ACLs are not supported, ignoring.
Aug 13 07:14:58.194265 systemd-tmpfiles[1355]: ACLs are not supported, ignoring.
Aug 13 07:14:58.201671 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:14:58.218390 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:14:58.888091 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:14:58.906445 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:14:58.923268 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Aug 13 07:14:58.923294 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Aug 13 07:14:58.930450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:15:00.148765 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:15:00.159414 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:15:00.186815 systemd-udevd[1384]: Using default interface naming scheme 'v255'.
Aug 13 07:15:00.856203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:15:00.868418 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:15:00.946083 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Aug 13 07:15:00.967953 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:15:01.095989 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:15:01.105012 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:15:01.113183 kernel: hv_vmbus: registering driver hv_balloon
Aug 13 07:15:01.125168 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Aug 13 07:15:01.128166 kernel: hv_vmbus: registering driver hyperv_fb
Aug 13 07:15:01.137873 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Aug 13 07:15:01.137991 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Aug 13 07:15:01.143736 kernel: Console: switching to colour dummy device 80x25
Aug 13 07:15:01.147792 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 07:15:01.189540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:01.225096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:15:01.226234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:01.242697 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:01.365446 systemd-networkd[1388]: lo: Link UP
Aug 13 07:15:01.366403 systemd-networkd[1388]: lo: Gained carrier
Aug 13 07:15:01.372411 systemd-networkd[1388]: Enumeration completed
Aug 13 07:15:01.373437 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:15:01.373906 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:15:01.377044 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:15:01.392715 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1405)
Aug 13 07:15:01.391370 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:15:01.502189 kernel: mlx5_core 53be:00:02.0 enP21438s1: Link up
Aug 13 07:15:01.524169 kernel: hv_netvsc 7c1e522d-accf-7c1e-522d-accf7c1e522d eth0: Data path switched to VF: enP21438s1
Aug 13 07:15:01.533492 systemd-networkd[1388]: enP21438s1: Link UP
Aug 13 07:15:01.533775 systemd-networkd[1388]: eth0: Link UP
Aug 13 07:15:01.533866 systemd-networkd[1388]: eth0: Gained carrier
Aug 13 07:15:01.533944 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:15:01.542220 systemd-networkd[1388]: enP21438s1: Gained carrier
Aug 13 07:15:01.549179 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Aug 13 07:15:01.576185 systemd-networkd[1388]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16
Aug 13 07:15:01.584179 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Aug 13 07:15:01.693102 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:15:01.705369 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:15:01.862553 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:15:01.906343 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:15:01.979998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:15:01.991384 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:15:01.999263 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:15:02.026486 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:15:02.026893 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:15:02.028199 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:15:02.028339 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:15:02.028574 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:15:02.030295 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:15:02.042597 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:15:02.045975 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:15:02.049016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:15:02.051361 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:15:02.056324 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:15:02.063329 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:15:02.096781 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:15:02.149576 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:15:02.151000 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:15:02.158172 kernel: loop0: detected capacity change from 0 to 31056
Aug 13 07:15:02.178635 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:15:02.813337 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:15:02.994176 kernel: loop1: detected capacity change from 0 to 142488
Aug 13 07:15:03.155782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:03.469388 systemd-networkd[1388]: eth0: Gained IPv6LL
Aug 13 07:15:03.476731 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:15:03.762182 kernel: loop2: detected capacity change from 0 to 140768
Aug 13 07:15:04.441172 kernel: loop3: detected capacity change from 0 to 221472
Aug 13 07:15:04.483171 kernel: loop4: detected capacity change from 0 to 31056
Aug 13 07:15:04.497185 kernel: loop5: detected capacity change from 0 to 142488
Aug 13 07:15:04.516168 kernel: loop6: detected capacity change from 0 to 140768
Aug 13 07:15:04.554169 kernel: loop7: detected capacity change from 0 to 221472
Aug 13 07:15:04.566640 (sd-merge)[1505]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Aug 13 07:15:04.567325 (sd-merge)[1505]: Merged extensions into '/usr'.
Aug 13 07:15:04.571316 systemd[1]: Reloading requested from client PID 1485 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:15:04.571338 systemd[1]: Reloading...
Aug 13 07:15:04.637177 zram_generator::config[1529]: No configuration found.
Aug 13 07:15:04.814552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:15:04.904634 systemd[1]: Reloading finished in 332 ms.
Aug 13 07:15:04.925052 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:15:04.947364 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:15:04.953492 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:15:04.962332 systemd[1]: Reloading requested from client PID 1596 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:15:04.962356 systemd[1]: Reloading...
Aug 13 07:15:04.988643 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:15:04.989854 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:15:04.990993 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:15:04.991338 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Aug 13 07:15:04.991426 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Aug 13 07:15:05.040248 zram_generator::config[1622]: No configuration found.
Aug 13 07:15:05.086352 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:15:05.086369 systemd-tmpfiles[1597]: Skipping /boot
Aug 13 07:15:05.103467 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:15:05.103490 systemd-tmpfiles[1597]: Skipping /boot
Aug 13 07:15:05.203307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:15:05.280647 systemd[1]: Reloading finished in 317 ms.
Aug 13 07:15:05.304987 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:15:05.319461 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:15:05.373550 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:15:05.379013 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:15:05.389600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:15:05.402290 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:15:05.407855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:05.408286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:15:05.416550 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:15:05.423471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:15:05.439470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:15:05.442947 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:15:05.446411 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:05.448085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:15:05.449606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:15:05.455943 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:15:05.457022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:15:05.462060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:15:05.462320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:15:05.479812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:05.480405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:15:05.487796 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:15:05.501401 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:15:05.515715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:15:05.521242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:15:05.522338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:05.526036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:15:05.533506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:15:05.539446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:15:05.539850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:15:05.544441 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:15:05.544655 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:15:05.554883 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:15:05.565396 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:05.565716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:15:05.570432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:15:05.579765 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:15:05.585694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:15:05.604513 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:15:05.608363 systemd-resolved[1698]: Positive Trust Anchors:
Aug 13 07:15:05.608374 systemd-resolved[1698]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:15:05.608421 systemd-resolved[1698]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:15:05.608703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:15:05.611197 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:15:05.616040 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:05.618763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:15:05.618984 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:15:05.625235 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:15:05.625618 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:15:05.629543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:15:05.629916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:15:05.634345 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:15:05.634586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:15:05.640481 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:15:05.651670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:15:05.651755 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:15:05.680666 systemd-resolved[1698]: Using system hostname 'ci-4081.3.5-a-0c3b310332'.
Aug 13 07:15:05.682624 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:15:05.686748 systemd[1]: Reached target network.target - Network.
Aug 13 07:15:05.689360 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:15:05.692822 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:15:05.731635 augenrules[1751]: No rules
Aug 13 07:15:05.734002 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:15:05.897114 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:15:08.598897 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:15:08.602939 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:15:12.649450 ldconfig[1482]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:15:12.702003 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:15:12.717373 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:15:12.748441 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:15:12.752406 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:15:12.755636 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:15:12.758905 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:15:12.762604 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:15:12.765475 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:15:12.768621 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:15:12.771802 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:15:12.771850 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:15:12.774373 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:15:12.809133 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:15:12.814673 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:15:12.833264 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:15:12.836649 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:15:12.839472 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:15:12.842076 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:15:12.844686 systemd[1]: System is tainted: cgroupsv1
Aug 13 07:15:12.844750 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:15:12.844782 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:15:12.850290 systemd[1]: Starting chronyd.service - NTP client/server...
Aug 13 07:15:12.855284 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:15:12.863351 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 07:15:12.877417 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:15:12.886545 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:15:12.901833 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:15:12.904956 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:15:12.905013 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Aug 13 07:15:12.910767 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Aug 13 07:15:12.917696 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Aug 13 07:15:12.923793 (chronyd)[1769]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Aug 13 07:15:12.926338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:15:12.933595 jq[1773]: false
Aug 13 07:15:12.941456 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:15:12.947694 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:15:12.960259 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:15:12.963106 KVP[1778]: KVP starting; pid is:1778
Aug 13 07:15:12.964135 chronyd[1789]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Aug 13 07:15:12.970113 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:15:12.982989 KVP[1778]: KVP LIC Version: 3.1
Aug 13 07:15:12.985491 kernel: hv_utils: KVP IC version 4.0
Aug 13 07:15:12.985374 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:15:12.992373 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:15:13.000139 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found loop4
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found loop5
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found loop6
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found loop7
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found sda
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found sda1
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found sda2
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found sda3
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found usr
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found sda4
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found sda6
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found sda7
Aug 13 07:15:13.011805 extend-filesystems[1777]: Found sda9
Aug 13 07:15:13.011805 extend-filesystems[1777]: Checking size of /dev/sda9
Aug 13 07:15:13.036600 chronyd[1789]: Timezone right/UTC failed leap second check, ignoring
Aug 13 07:15:13.014803 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:15:13.036810 chronyd[1789]: Loaded seccomp filter (level 2)
Aug 13 07:15:13.031456 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:15:13.058688 jq[1803]: true
Aug 13 07:15:13.065920 systemd[1]: Started chronyd.service - NTP client/server.
Aug 13 07:15:13.070615 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:15:13.070937 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:15:13.074531 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:15:13.074871 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:15:13.081269 extend-filesystems[1777]: Old size kept for /dev/sda9
Aug 13 07:15:13.081269 extend-filesystems[1777]: Found sr0
Aug 13 07:15:13.096213 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:15:13.096528 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:15:13.112678 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:15:13.112999 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:15:13.148874 (ntainerd)[1819]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:15:13.183670 jq[1817]: true
Aug 13 07:15:13.204875 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:15:13.217547 update_engine[1799]: I20250813 07:15:13.217438 1799 main.cc:92] Flatcar Update Engine starting
Aug 13 07:15:13.239021 systemd-logind[1793]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:15:13.245572 systemd-logind[1793]: New seat seat0.
Aug 13 07:15:13.248331 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:15:13.283423 tar[1815]: linux-amd64/helm
Aug 13 07:15:13.328023 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1848)
Aug 13 07:15:13.341308 sshd_keygen[1814]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:15:13.362310 dbus-daemon[1772]: [system] SELinux support is enabled
Aug 13 07:15:13.362587 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:15:13.388601 dbus-daemon[1772]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 07:15:13.395964 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:15:13.396034 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:15:13.400544 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:15:13.400590 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:15:13.405804 update_engine[1799]: I20250813 07:15:13.405737 1799 update_check_scheduler.cc:74] Next update check in 6m36s Aug 13 07:15:13.417137 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:15:13.436284 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:15:13.436960 bash[1886]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:15:13.445896 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:15:13.467601 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:15:13.473780 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 07:15:13.491413 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Aug 13 07:15:13.503620 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:15:13.505448 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:15:13.545586 systemd[1]: issuegen.service: Deactivated successfully. 
Aug 13 07:15:13.545967 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:15:13.566275 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:15:13.590528 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Aug 13 07:15:13.608776 coreos-metadata[1771]: Aug 13 07:15:13.608 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 07:15:13.614502 coreos-metadata[1771]: Aug 13 07:15:13.614 INFO Fetch successful Aug 13 07:15:13.614881 coreos-metadata[1771]: Aug 13 07:15:13.614 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Aug 13 07:15:13.629046 coreos-metadata[1771]: Aug 13 07:15:13.621 INFO Fetch successful Aug 13 07:15:13.633967 coreos-metadata[1771]: Aug 13 07:15:13.632 INFO Fetching http://168.63.129.16/machine/4cfdab93-7528-4f2f-8d75-a623b6658cf7/fe91b5c5%2Dc2b9%2D4fc0%2D8844%2Defead555c30c.%5Fci%2D4081.3.5%2Da%2D0c3b310332?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Aug 13 07:15:13.640360 coreos-metadata[1771]: Aug 13 07:15:13.638 INFO Fetch successful Aug 13 07:15:13.640360 coreos-metadata[1771]: Aug 13 07:15:13.640 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Aug 13 07:15:13.657252 coreos-metadata[1771]: Aug 13 07:15:13.655 INFO Fetch successful Aug 13 07:15:13.657951 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:15:13.672465 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:15:13.694053 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:15:13.697677 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:15:13.716368 locksmithd[1910]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:15:13.755478 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Aug 13 07:15:13.760204 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:15:14.043627 tar[1815]: linux-amd64/LICENSE Aug 13 07:15:14.043627 tar[1815]: linux-amd64/README.md Aug 13 07:15:14.057843 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:15:14.463004 containerd[1819]: time="2025-08-13T07:15:14.462605800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:15:14.505731 containerd[1819]: time="2025-08-13T07:15:14.505664400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:15:14.507875 containerd[1819]: time="2025-08-13T07:15:14.507492700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:15:14.507875 containerd[1819]: time="2025-08-13T07:15:14.507542500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:15:14.507875 containerd[1819]: time="2025-08-13T07:15:14.507565300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:15:14.507875 containerd[1819]: time="2025-08-13T07:15:14.507742600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:15:14.507875 containerd[1819]: time="2025-08-13T07:15:14.507762500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:15:14.507875 containerd[1819]: time="2025-08-13T07:15:14.507830500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:15:14.507875 containerd[1819]: time="2025-08-13T07:15:14.507846000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:15:14.508213 containerd[1819]: time="2025-08-13T07:15:14.508183800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:15:14.508254 containerd[1819]: time="2025-08-13T07:15:14.508213600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:15:14.508254 containerd[1819]: time="2025-08-13T07:15:14.508233400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:15:14.508254 containerd[1819]: time="2025-08-13T07:15:14.508248600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:15:14.508381 containerd[1819]: time="2025-08-13T07:15:14.508356200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:15:14.508616 containerd[1819]: time="2025-08-13T07:15:14.508585100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:15:14.508840 containerd[1819]: time="2025-08-13T07:15:14.508807400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:15:14.508840 containerd[1819]: time="2025-08-13T07:15:14.508835400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:15:14.508966 containerd[1819]: time="2025-08-13T07:15:14.508944200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:15:14.509025 containerd[1819]: time="2025-08-13T07:15:14.509007600Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.521969200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.522062300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.522086700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.522122100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.522174200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.522372800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.522849200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.523000900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.523024800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.523043300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:15:14.523042 containerd[1819]: time="2025-08-13T07:15:14.523061000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523080600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523095900Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523113800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523132800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523164900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523181000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523199100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523225900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523245900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523263200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523286700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523312400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523334700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.523531 containerd[1819]: time="2025-08-13T07:15:14.523352400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523370900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523389300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523411200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523429400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523447100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523465800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523489900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523520200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523535400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523551100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523605400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523628400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:15:14.524066 containerd[1819]: time="2025-08-13T07:15:14.523645900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Aug 13 07:15:14.524554 containerd[1819]: time="2025-08-13T07:15:14.523664600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:15:14.524554 containerd[1819]: time="2025-08-13T07:15:14.523679200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.524554 containerd[1819]: time="2025-08-13T07:15:14.523697600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:15:14.524554 containerd[1819]: time="2025-08-13T07:15:14.523712000Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:15:14.524554 containerd[1819]: time="2025-08-13T07:15:14.523726500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 07:15:14.525596 containerd[1819]: time="2025-08-13T07:15:14.524104200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:15:14.525596 containerd[1819]: time="2025-08-13T07:15:14.524926400Z" level=info msg="Connect containerd service" Aug 13 07:15:14.525596 containerd[1819]: time="2025-08-13T07:15:14.524978800Z" level=info msg="using legacy CRI server" Aug 13 07:15:14.525596 containerd[1819]: time="2025-08-13T07:15:14.524989600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:15:14.525596 containerd[1819]: 
time="2025-08-13T07:15:14.525119100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:15:14.526004 containerd[1819]: time="2025-08-13T07:15:14.525966400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:15:14.526339 containerd[1819]: time="2025-08-13T07:15:14.526310200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:15:14.526401 containerd[1819]: time="2025-08-13T07:15:14.526369400Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:15:14.527471 containerd[1819]: time="2025-08-13T07:15:14.526438000Z" level=info msg="Start subscribing containerd event" Aug 13 07:15:14.527471 containerd[1819]: time="2025-08-13T07:15:14.526562800Z" level=info msg="Start recovering state" Aug 13 07:15:14.527471 containerd[1819]: time="2025-08-13T07:15:14.526635700Z" level=info msg="Start event monitor" Aug 13 07:15:14.527471 containerd[1819]: time="2025-08-13T07:15:14.526653500Z" level=info msg="Start snapshots syncer" Aug 13 07:15:14.527471 containerd[1819]: time="2025-08-13T07:15:14.526666300Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:15:14.527471 containerd[1819]: time="2025-08-13T07:15:14.526676600Z" level=info msg="Start streaming server" Aug 13 07:15:14.526882 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:15:14.527789 containerd[1819]: time="2025-08-13T07:15:14.527773600Z" level=info msg="containerd successfully booted in 0.066589s" Aug 13 07:15:14.840353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:14.844799 systemd[1]: Reached target multi-user.target - Multi-User System. 
Aug 13 07:15:14.847934 systemd[1]: Startup finished in 992ms (firmware) + 32.850s (loader) + 15.696s (kernel) + 24.990s (userspace) = 1min 14.530s. Aug 13 07:15:14.852939 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:15:15.469526 kubelet[1962]: E0813 07:15:15.469406 1962 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:15:15.473246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:15:15.474202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:15:15.507358 login[1928]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Aug 13 07:15:15.541654 login[1927]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:15:15.553004 systemd-logind[1793]: New session 1 of user core. Aug 13 07:15:15.554404 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:15:15.565234 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:15:15.595024 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:15:15.610208 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:15:15.633506 (systemd)[1978]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:15:16.000693 systemd[1978]: Queued start job for default target default.target. Aug 13 07:15:16.001624 systemd[1978]: Created slice app.slice - User Application Slice. Aug 13 07:15:16.001653 systemd[1978]: Reached target paths.target - Paths. 
Aug 13 07:15:16.001672 systemd[1978]: Reached target timers.target - Timers. Aug 13 07:15:16.008353 systemd[1978]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:15:16.018809 systemd[1978]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:15:16.020760 systemd[1978]: Reached target sockets.target - Sockets. Aug 13 07:15:16.020940 systemd[1978]: Reached target basic.target - Basic System. Aug 13 07:15:16.021078 systemd[1978]: Reached target default.target - Main User Target. Aug 13 07:15:16.021121 systemd[1978]: Startup finished in 370ms. Aug 13 07:15:16.021529 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:15:16.028796 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:15:16.126785 waagent[1924]: 2025-08-13T07:15:16.126215Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Aug 13 07:15:16.128164 waagent[1924]: 2025-08-13T07:15:16.127294Z INFO Daemon Daemon OS: flatcar 4081.3.5 Aug 13 07:15:16.128164 waagent[1924]: 2025-08-13T07:15:16.128112Z INFO Daemon Daemon Python: 3.11.9 Aug 13 07:15:16.129329 waagent[1924]: 2025-08-13T07:15:16.129280Z INFO Daemon Daemon Run daemon Aug 13 07:15:16.130023 waagent[1924]: 2025-08-13T07:15:16.129985Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5' Aug 13 07:15:16.130728 waagent[1924]: 2025-08-13T07:15:16.130690Z INFO Daemon Daemon Using waagent for provisioning Aug 13 07:15:16.131833 waagent[1924]: 2025-08-13T07:15:16.131796Z INFO Daemon Daemon Activate resource disk Aug 13 07:15:16.132589 waagent[1924]: 2025-08-13T07:15:16.132552Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 07:15:16.137337 waagent[1924]: 2025-08-13T07:15:16.137286Z INFO Daemon Daemon Found device: None Aug 13 07:15:16.138354 waagent[1924]: 2025-08-13T07:15:16.138314Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect 
disk topology Aug 13 07:15:16.139167 waagent[1924]: 2025-08-13T07:15:16.139122Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 07:15:16.142583 waagent[1924]: 2025-08-13T07:15:16.141657Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 07:15:16.143096 waagent[1924]: 2025-08-13T07:15:16.143059Z INFO Daemon Daemon Running default provisioning handler Aug 13 07:15:16.165365 waagent[1924]: 2025-08-13T07:15:16.165244Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Aug 13 07:15:16.172052 waagent[1924]: 2025-08-13T07:15:16.171973Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 07:15:16.176574 waagent[1924]: 2025-08-13T07:15:16.176493Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 07:15:16.179330 waagent[1924]: 2025-08-13T07:15:16.179247Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 07:15:16.336858 waagent[1924]: 2025-08-13T07:15:16.336387Z INFO Daemon Daemon Successfully mounted dvd Aug 13 07:15:16.351371 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 13 07:15:16.353545 waagent[1924]: 2025-08-13T07:15:16.353273Z INFO Daemon Daemon Detect protocol endpoint Aug 13 07:15:16.356396 waagent[1924]: 2025-08-13T07:15:16.356260Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 07:15:16.369162 waagent[1924]: 2025-08-13T07:15:16.356500Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Aug 13 07:15:16.369162 waagent[1924]: 2025-08-13T07:15:16.356977Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 07:15:16.369162 waagent[1924]: 2025-08-13T07:15:16.358087Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 07:15:16.369162 waagent[1924]: 2025-08-13T07:15:16.358475Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 07:15:16.383859 waagent[1924]: 2025-08-13T07:15:16.383792Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 07:15:16.392588 waagent[1924]: 2025-08-13T07:15:16.384380Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 07:15:16.392588 waagent[1924]: 2025-08-13T07:15:16.385096Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 07:15:16.509775 login[1928]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:15:16.515433 systemd-logind[1793]: New session 2 of user core. Aug 13 07:15:16.522529 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:15:16.537165 waagent[1924]: 2025-08-13T07:15:16.535479Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 07:15:16.537165 waagent[1924]: 2025-08-13T07:15:16.536441Z INFO Daemon Daemon Forcing an update of the goal state. Aug 13 07:15:16.540660 waagent[1924]: 2025-08-13T07:15:16.540609Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 07:15:16.554907 waagent[1924]: 2025-08-13T07:15:16.554844Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Aug 13 07:15:16.570001 waagent[1924]: 2025-08-13T07:15:16.555711Z INFO Daemon Aug 13 07:15:16.570001 waagent[1924]: 2025-08-13T07:15:16.556949Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9870f444-1b26-40e8-bc15-6548d0d9bda3 eTag: 16329645098484722056 source: Fabric] Aug 13 07:15:16.570001 waagent[1924]: 2025-08-13T07:15:16.558082Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Aug 13 07:15:16.570001 waagent[1924]: 2025-08-13T07:15:16.559159Z INFO Daemon Aug 13 07:15:16.570001 waagent[1924]: 2025-08-13T07:15:16.560094Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 13 07:15:16.572826 waagent[1924]: 2025-08-13T07:15:16.572772Z INFO Daemon Daemon Downloading artifacts profile blob Aug 13 07:15:16.712976 waagent[1924]: 2025-08-13T07:15:16.712821Z INFO Daemon Downloaded certificate {'thumbprint': 'C703E3A8B2A3FC639FA1F8B476A42396DF815F47', 'hasPrivateKey': True} Aug 13 07:15:16.718487 waagent[1924]: 2025-08-13T07:15:16.718412Z INFO Daemon Fetch goal state completed Aug 13 07:15:16.764395 waagent[1924]: 2025-08-13T07:15:16.764314Z INFO Daemon Daemon Starting provisioning Aug 13 07:15:16.767617 waagent[1924]: 2025-08-13T07:15:16.767529Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 07:15:16.770412 waagent[1924]: 2025-08-13T07:15:16.770327Z INFO Daemon Daemon Set hostname [ci-4081.3.5-a-0c3b310332] Aug 13 07:15:16.807916 waagent[1924]: 2025-08-13T07:15:16.807820Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-a-0c3b310332] Aug 13 07:15:16.814941 waagent[1924]: 2025-08-13T07:15:16.808522Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 07:15:16.814941 waagent[1924]: 2025-08-13T07:15:16.809438Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 07:15:16.867033 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:15:16.867044 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 07:15:16.867097 systemd-networkd[1388]: eth0: DHCP lease lost Aug 13 07:15:16.868513 waagent[1924]: 2025-08-13T07:15:16.868425Z INFO Daemon Daemon Create user account if not exists Aug 13 07:15:16.886195 waagent[1924]: 2025-08-13T07:15:16.868917Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 07:15:16.886195 waagent[1924]: 2025-08-13T07:15:16.870015Z INFO Daemon Daemon Configure sudoer Aug 13 07:15:16.886195 waagent[1924]: 2025-08-13T07:15:16.871352Z INFO Daemon Daemon Configure sshd Aug 13 07:15:16.886195 waagent[1924]: 2025-08-13T07:15:16.872259Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 13 07:15:16.886195 waagent[1924]: 2025-08-13T07:15:16.872617Z INFO Daemon Daemon Deploy ssh public key. Aug 13 07:15:16.886358 systemd-networkd[1388]: eth0: DHCPv6 lease lost Aug 13 07:15:16.916224 systemd-networkd[1388]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 07:15:18.047546 waagent[1924]: 2025-08-13T07:15:18.047471Z INFO Daemon Daemon Provisioning complete Aug 13 07:15:18.061458 waagent[1924]: 2025-08-13T07:15:18.061388Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 07:15:18.064609 waagent[1924]: 2025-08-13T07:15:18.064540Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Aug 13 07:15:18.069354 waagent[1924]: 2025-08-13T07:15:18.069170Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Aug 13 07:15:18.211995 waagent[2030]: 2025-08-13T07:15:18.211885Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Aug 13 07:15:18.212498 waagent[2030]: 2025-08-13T07:15:18.212061Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5 Aug 13 07:15:18.212498 waagent[2030]: 2025-08-13T07:15:18.212173Z INFO ExtHandler ExtHandler Python: 3.11.9 Aug 13 07:15:18.275176 waagent[2030]: 2025-08-13T07:15:18.275068Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 07:15:18.275436 waagent[2030]: 2025-08-13T07:15:18.275380Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 07:15:18.275529 waagent[2030]: 2025-08-13T07:15:18.275488Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 07:15:18.284011 waagent[2030]: 2025-08-13T07:15:18.283928Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 07:15:18.289915 waagent[2030]: 2025-08-13T07:15:18.289857Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 07:15:18.290448 waagent[2030]: 2025-08-13T07:15:18.290391Z INFO ExtHandler Aug 13 07:15:18.290547 waagent[2030]: 2025-08-13T07:15:18.290489Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f963d941-2e7f-486b-a1f2-aa46eefad344 eTag: 16329645098484722056 source: Fabric] Aug 13 07:15:18.290847 waagent[2030]: 2025-08-13T07:15:18.290797Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Aug 13 07:15:18.291431 waagent[2030]: 2025-08-13T07:15:18.291377Z INFO ExtHandler Aug 13 07:15:18.291512 waagent[2030]: 2025-08-13T07:15:18.291460Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 07:15:18.295356 waagent[2030]: 2025-08-13T07:15:18.295314Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 07:15:18.365897 waagent[2030]: 2025-08-13T07:15:18.365797Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C703E3A8B2A3FC639FA1F8B476A42396DF815F47', 'hasPrivateKey': True} Aug 13 07:15:18.366496 waagent[2030]: 2025-08-13T07:15:18.366436Z INFO ExtHandler Fetch goal state completed Aug 13 07:15:18.380763 waagent[2030]: 2025-08-13T07:15:18.380684Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2030 Aug 13 07:15:18.380939 waagent[2030]: 2025-08-13T07:15:18.380886Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 13 07:15:18.382650 waagent[2030]: 2025-08-13T07:15:18.382588Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 07:15:18.383029 waagent[2030]: 2025-08-13T07:15:18.382976Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 07:15:18.472347 waagent[2030]: 2025-08-13T07:15:18.472297Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 07:15:18.472605 waagent[2030]: 2025-08-13T07:15:18.472554Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 07:15:18.479443 waagent[2030]: 2025-08-13T07:15:18.479398Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 13 07:15:18.487295 systemd[1]: Reloading requested from client PID 2043 ('systemctl') (unit waagent.service)... Aug 13 07:15:18.487315 systemd[1]: Reloading... 
Aug 13 07:15:18.580183 zram_generator::config[2077]: No configuration found. Aug 13 07:15:18.710133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:15:18.791545 systemd[1]: Reloading finished in 303 ms. Aug 13 07:15:18.819370 waagent[2030]: 2025-08-13T07:15:18.818007Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Aug 13 07:15:18.826587 systemd[1]: Reloading requested from client PID 2139 ('systemctl') (unit waagent.service)... Aug 13 07:15:18.826606 systemd[1]: Reloading... Aug 13 07:15:18.917177 zram_generator::config[2176]: No configuration found. Aug 13 07:15:19.054445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:15:19.135037 systemd[1]: Reloading finished in 307 ms. Aug 13 07:15:19.160355 waagent[2030]: 2025-08-13T07:15:19.159305Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 13 07:15:19.160355 waagent[2030]: 2025-08-13T07:15:19.159499Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 13 07:15:19.879776 waagent[2030]: 2025-08-13T07:15:19.879668Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 13 07:15:19.880614 waagent[2030]: 2025-08-13T07:15:19.880536Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Aug 13 07:15:19.881590 waagent[2030]: 2025-08-13T07:15:19.881519Z INFO ExtHandler ExtHandler Starting env monitor service. 
Aug 13 07:15:19.882172 waagent[2030]: 2025-08-13T07:15:19.882071Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 07:15:19.882332 waagent[2030]: 2025-08-13T07:15:19.882272Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 07:15:19.882730 waagent[2030]: 2025-08-13T07:15:19.882667Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 07:15:19.882872 waagent[2030]: 2025-08-13T07:15:19.882818Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 07:15:19.883095 waagent[2030]: 2025-08-13T07:15:19.883039Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 07:15:19.883373 waagent[2030]: 2025-08-13T07:15:19.883327Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 07:15:19.883490 waagent[2030]: 2025-08-13T07:15:19.883435Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 07:15:19.884190 waagent[2030]: 2025-08-13T07:15:19.884084Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Aug 13 07:15:19.884368 waagent[2030]: 2025-08-13T07:15:19.884312Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 07:15:19.884592 waagent[2030]: 2025-08-13T07:15:19.884534Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 07:15:19.884922 waagent[2030]: 2025-08-13T07:15:19.884871Z INFO EnvHandler ExtHandler Configure routes Aug 13 07:15:19.885346 waagent[2030]: 2025-08-13T07:15:19.885277Z INFO EnvHandler ExtHandler Gateway:None Aug 13 07:15:19.885515 waagent[2030]: 2025-08-13T07:15:19.885465Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 07:15:19.885515 waagent[2030]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 07:15:19.885515 waagent[2030]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 07:15:19.885515 waagent[2030]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 07:15:19.885515 waagent[2030]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 07:15:19.885515 waagent[2030]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 07:15:19.885515 waagent[2030]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 07:15:19.886015 waagent[2030]: 2025-08-13T07:15:19.885968Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Aug 13 07:15:19.886256 waagent[2030]: 2025-08-13T07:15:19.886135Z INFO EnvHandler ExtHandler Routes:None Aug 13 07:15:19.893967 waagent[2030]: 2025-08-13T07:15:19.893909Z INFO ExtHandler ExtHandler Aug 13 07:15:19.894104 waagent[2030]: 2025-08-13T07:15:19.894030Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 66b59537-0cf9-4e51-be2a-618aaf627c5e correlation 05e8bad3-a473-4332-9aff-47875e86545a created: 2025-08-13T07:13:49.630859Z] Aug 13 07:15:19.894478 waagent[2030]: 2025-08-13T07:15:19.894428Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Aug 13 07:15:19.895018 waagent[2030]: 2025-08-13T07:15:19.894974Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Aug 13 07:15:19.960224 waagent[2030]: 2025-08-13T07:15:19.960048Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F2BB0AC6-B12F-4448-ADD2-331A3E4CB6FD;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Aug 13 07:15:20.021755 waagent[2030]: 2025-08-13T07:15:20.021672Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 07:15:20.021755 waagent[2030]: Executing ['ip', '-a', '-o', 'link']: Aug 13 07:15:20.021755 waagent[2030]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 07:15:20.021755 waagent[2030]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2d:ac:cf brd ff:ff:ff:ff:ff:ff Aug 13 07:15:20.021755 waagent[2030]: 3: enP21438s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2d:ac:cf brd ff:ff:ff:ff:ff:ff\ altname enP21438p0s2 Aug 13 07:15:20.021755 waagent[2030]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 07:15:20.021755 waagent[2030]: 1: lo inet 127.0.0.1/8 scope host lo\ 
valid_lft forever preferred_lft forever Aug 13 07:15:20.021755 waagent[2030]: 2: eth0 inet 10.200.4.34/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 07:15:20.021755 waagent[2030]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 07:15:20.021755 waagent[2030]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Aug 13 07:15:20.021755 waagent[2030]: 2: eth0 inet6 fe80::7e1e:52ff:fe2d:accf/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 13 07:15:20.093550 waagent[2030]: 2025-08-13T07:15:20.093480Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Aug 13 07:15:20.093550 waagent[2030]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:15:20.093550 waagent[2030]: pkts bytes target prot opt in out source destination Aug 13 07:15:20.093550 waagent[2030]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:15:20.093550 waagent[2030]: pkts bytes target prot opt in out source destination Aug 13 07:15:20.093550 waagent[2030]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:15:20.093550 waagent[2030]: pkts bytes target prot opt in out source destination Aug 13 07:15:20.093550 waagent[2030]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 07:15:20.093550 waagent[2030]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 07:15:20.093550 waagent[2030]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 07:15:20.097085 waagent[2030]: 2025-08-13T07:15:20.097019Z INFO EnvHandler ExtHandler Current Firewall rules: Aug 13 07:15:20.097085 waagent[2030]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:15:20.097085 waagent[2030]: pkts bytes target prot opt in out source destination Aug 13 07:15:20.097085 waagent[2030]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:15:20.097085 waagent[2030]: pkts bytes target prot opt in 
out source destination Aug 13 07:15:20.097085 waagent[2030]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 07:15:20.097085 waagent[2030]: pkts bytes target prot opt in out source destination Aug 13 07:15:20.097085 waagent[2030]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 07:15:20.097085 waagent[2030]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 07:15:20.097085 waagent[2030]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 07:15:20.097589 waagent[2030]: 2025-08-13T07:15:20.097376Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 13 07:15:25.566642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:15:25.579458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:15:25.702386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:25.707446 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:15:26.357626 kubelet[2278]: E0813 07:15:26.357567 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:15:26.361294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:15:26.361631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:15:36.566944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 07:15:36.572408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 07:15:36.828296 chronyd[1789]: Selected source PHC0 Aug 13 07:15:36.925807 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:15:36.934490 systemd[1]: Started sshd@0-10.200.4.34:22-10.200.16.10:55968.service - OpenSSH per-connection server daemon (10.200.16.10:55968). Aug 13 07:15:36.947345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:36.951129 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:15:37.375756 kubelet[2299]: E0813 07:15:37.375638 2299 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:15:37.378464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:15:37.378793 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:15:38.012570 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 55968 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:15:38.014113 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:38.018473 systemd-logind[1793]: New session 3 of user core. Aug 13 07:15:38.029519 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:15:38.541523 systemd[1]: Started sshd@1-10.200.4.34:22-10.200.16.10:55976.service - OpenSSH per-connection server daemon (10.200.16.10:55976). 
Aug 13 07:15:39.124324 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 55976 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:15:39.126130 sshd[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:39.131313 systemd-logind[1793]: New session 4 of user core. Aug 13 07:15:39.137500 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:15:39.553360 sshd[2312]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:39.557105 systemd[1]: sshd@1-10.200.4.34:22-10.200.16.10:55976.service: Deactivated successfully. Aug 13 07:15:39.562794 systemd-logind[1793]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:15:39.563345 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:15:39.564712 systemd-logind[1793]: Removed session 4. Aug 13 07:15:39.662906 systemd[1]: Started sshd@2-10.200.4.34:22-10.200.16.10:55992.service - OpenSSH per-connection server daemon (10.200.16.10:55992). Aug 13 07:15:40.246455 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 55992 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:15:40.248355 sshd[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:40.254303 systemd-logind[1793]: New session 5 of user core. Aug 13 07:15:40.263448 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:15:40.679650 sshd[2320]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:40.684699 systemd[1]: sshd@2-10.200.4.34:22-10.200.16.10:55992.service: Deactivated successfully. Aug 13 07:15:40.689467 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:15:40.690240 systemd-logind[1793]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:15:40.691198 systemd-logind[1793]: Removed session 5. 
Aug 13 07:15:40.780842 systemd[1]: Started sshd@3-10.200.4.34:22-10.200.16.10:40984.service - OpenSSH per-connection server daemon (10.200.16.10:40984). Aug 13 07:15:41.364098 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 40984 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:15:41.365932 sshd[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:41.370475 systemd-logind[1793]: New session 6 of user core. Aug 13 07:15:41.380469 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:15:41.792387 sshd[2328]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:41.796480 systemd[1]: sshd@3-10.200.4.34:22-10.200.16.10:40984.service: Deactivated successfully. Aug 13 07:15:41.801863 systemd-logind[1793]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:15:41.802678 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:15:41.803635 systemd-logind[1793]: Removed session 6. Aug 13 07:15:41.899836 systemd[1]: Started sshd@4-10.200.4.34:22-10.200.16.10:40998.service - OpenSSH per-connection server daemon (10.200.16.10:40998). Aug 13 07:15:42.488461 sshd[2336]: Accepted publickey for core from 10.200.16.10 port 40998 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:15:42.490861 sshd[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:42.495599 systemd-logind[1793]: New session 7 of user core. Aug 13 07:15:42.501516 systemd[1]: Started session-7.scope - Session 7 of User core. 
Aug 13 07:15:43.034369 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:15:43.034758 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:15:43.063610 sudo[2340]: pam_unix(sudo:session): session closed for user root Aug 13 07:15:43.159404 sshd[2336]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:43.163600 systemd[1]: sshd@4-10.200.4.34:22-10.200.16.10:40998.service: Deactivated successfully. Aug 13 07:15:43.170169 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:15:43.171118 systemd-logind[1793]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:15:43.172178 systemd-logind[1793]: Removed session 7. Aug 13 07:15:43.260908 systemd[1]: Started sshd@5-10.200.4.34:22-10.200.16.10:41006.service - OpenSSH per-connection server daemon (10.200.16.10:41006). Aug 13 07:15:43.845753 sshd[2345]: Accepted publickey for core from 10.200.16.10 port 41006 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:15:43.847594 sshd[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:43.852255 systemd-logind[1793]: New session 8 of user core. Aug 13 07:15:43.858478 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 07:15:44.173698 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:15:44.174197 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:15:44.178096 sudo[2350]: pam_unix(sudo:session): session closed for user root Aug 13 07:15:44.183569 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:15:44.183944 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:15:44.198809 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:15:44.200896 auditctl[2353]: No rules Aug 13 07:15:44.201498 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:15:44.201836 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:15:44.210611 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:15:44.234895 augenrules[2372]: No rules Aug 13 07:15:44.236788 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:15:44.239366 sudo[2349]: pam_unix(sudo:session): session closed for user root Aug 13 07:15:44.343647 sshd[2345]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:44.347234 systemd[1]: sshd@5-10.200.4.34:22-10.200.16.10:41006.service: Deactivated successfully. Aug 13 07:15:44.351766 systemd-logind[1793]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:15:44.353085 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:15:44.354906 systemd-logind[1793]: Removed session 8. Aug 13 07:15:44.446826 systemd[1]: Started sshd@6-10.200.4.34:22-10.200.16.10:41014.service - OpenSSH per-connection server daemon (10.200.16.10:41014). 
Aug 13 07:15:45.029405 sshd[2381]: Accepted publickey for core from 10.200.16.10 port 41014 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:15:45.031169 sshd[2381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:45.036845 systemd-logind[1793]: New session 9 of user core. Aug 13 07:15:45.043451 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:15:45.357382 sudo[2385]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:15:45.357849 sudo[2385]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:15:46.665490 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:15:46.667105 (dockerd)[2401]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:15:47.566557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 07:15:47.573867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:15:48.442305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:48.456689 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:15:48.495586 kubelet[2417]: E0813 07:15:48.495188 2417 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:15:48.498364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:15:48.498577 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 07:15:49.009821 dockerd[2401]: time="2025-08-13T07:15:49.009756106Z" level=info msg="Starting up" Aug 13 07:15:49.274183 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Aug 13 07:15:49.545225 dockerd[2401]: time="2025-08-13T07:15:49.545163017Z" level=info msg="Loading containers: start." Aug 13 07:15:49.743197 kernel: Initializing XFRM netlink socket Aug 13 07:15:50.006123 systemd-networkd[1388]: docker0: Link UP Aug 13 07:15:50.036725 dockerd[2401]: time="2025-08-13T07:15:50.036681370Z" level=info msg="Loading containers: done." Aug 13 07:15:50.113665 dockerd[2401]: time="2025-08-13T07:15:50.113608448Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:15:50.113891 dockerd[2401]: time="2025-08-13T07:15:50.113747950Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:15:50.113940 dockerd[2401]: time="2025-08-13T07:15:50.113907752Z" level=info msg="Daemon has completed initialization" Aug 13 07:15:50.176436 dockerd[2401]: time="2025-08-13T07:15:50.176024442Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:15:50.177025 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:15:51.095622 containerd[1819]: time="2025-08-13T07:15:51.095577740Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 07:15:51.801993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803159565.mount: Deactivated successfully. 
Aug 13 07:15:53.342841 containerd[1819]: time="2025-08-13T07:15:53.342781258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:53.345211 containerd[1819]: time="2025-08-13T07:15:53.345130489Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077767" Aug 13 07:15:53.348251 containerd[1819]: time="2025-08-13T07:15:53.348181730Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:53.353372 containerd[1819]: time="2025-08-13T07:15:53.353307099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:53.354796 containerd[1819]: time="2025-08-13T07:15:53.354292012Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.258667472s" Aug 13 07:15:53.354796 containerd[1819]: time="2025-08-13T07:15:53.354342113Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 07:15:53.355319 containerd[1819]: time="2025-08-13T07:15:53.355113823Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 07:15:54.918237 containerd[1819]: time="2025-08-13T07:15:54.918182462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:54.921761 containerd[1819]: time="2025-08-13T07:15:54.921488007Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713253" Aug 13 07:15:54.925177 containerd[1819]: time="2025-08-13T07:15:54.924626149Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:54.929893 containerd[1819]: time="2025-08-13T07:15:54.929836918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:54.931352 containerd[1819]: time="2025-08-13T07:15:54.931179436Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.576030012s" Aug 13 07:15:54.931352 containerd[1819]: time="2025-08-13T07:15:54.931229837Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 07:15:54.931963 containerd[1819]: time="2025-08-13T07:15:54.931936547Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 07:15:56.240430 containerd[1819]: time="2025-08-13T07:15:56.240367175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:56.245421 containerd[1819]: 
time="2025-08-13T07:15:56.245342641Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783708" Aug 13 07:15:56.250268 containerd[1819]: time="2025-08-13T07:15:56.250200506Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:56.256994 containerd[1819]: time="2025-08-13T07:15:56.256919696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:56.258479 containerd[1819]: time="2025-08-13T07:15:56.257949010Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.325972863s" Aug 13 07:15:56.258479 containerd[1819]: time="2025-08-13T07:15:56.257991411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 07:15:56.258854 containerd[1819]: time="2025-08-13T07:15:56.258822222Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 07:15:57.581832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1823280274.mount: Deactivated successfully. 
Aug 13 07:15:58.099795 containerd[1819]: time="2025-08-13T07:15:58.099728483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:58.102297 containerd[1819]: time="2025-08-13T07:15:58.102233616Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383620" Aug 13 07:15:58.105032 containerd[1819]: time="2025-08-13T07:15:58.104968553Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:58.109489 containerd[1819]: time="2025-08-13T07:15:58.109411713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:58.110581 containerd[1819]: time="2025-08-13T07:15:58.110069321Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.851212999s" Aug 13 07:15:58.110581 containerd[1819]: time="2025-08-13T07:15:58.110113322Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 07:15:58.110823 containerd[1819]: time="2025-08-13T07:15:58.110729930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:15:58.169888 update_engine[1799]: I20250813 07:15:58.169759 1799 update_attempter.cc:509] Updating boot flags... 
Aug 13 07:15:58.255180 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2637) Aug 13 07:15:58.393177 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2638) Aug 13 07:15:58.566393 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 07:15:58.573416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:15:58.895426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:58.899661 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:15:59.383660 kubelet[2703]: E0813 07:15:59.383602 2703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:15:59.386054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:15:59.386356 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:15:59.434592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003562733.mount: Deactivated successfully. 
Aug 13 07:16:00.677414 containerd[1819]: time="2025-08-13T07:16:00.677355513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:00.680339 containerd[1819]: time="2025-08-13T07:16:00.680268652Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Aug 13 07:16:00.683419 containerd[1819]: time="2025-08-13T07:16:00.683334193Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:00.689209 containerd[1819]: time="2025-08-13T07:16:00.689105571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:00.690433 containerd[1819]: time="2025-08-13T07:16:00.690256386Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.579488055s" Aug 13 07:16:00.690433 containerd[1819]: time="2025-08-13T07:16:00.690305487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:16:00.691381 containerd[1819]: time="2025-08-13T07:16:00.691057597Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:16:01.305726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1571411311.mount: Deactivated successfully. 
Aug 13 07:16:01.329028 containerd[1819]: time="2025-08-13T07:16:01.328970194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:01.331868 containerd[1819]: time="2025-08-13T07:16:01.331792258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Aug 13 07:16:01.334673 containerd[1819]: time="2025-08-13T07:16:01.334611522Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:01.340841 containerd[1819]: time="2025-08-13T07:16:01.340767761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:01.341651 containerd[1819]: time="2025-08-13T07:16:01.341498678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 650.401481ms" Aug 13 07:16:01.341651 containerd[1819]: time="2025-08-13T07:16:01.341541979Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:16:01.342424 containerd[1819]: time="2025-08-13T07:16:01.342261595Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 07:16:02.038032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4159085408.mount: Deactivated successfully. 
Aug 13 07:16:04.258080 containerd[1819]: time="2025-08-13T07:16:04.258016359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:04.261531 containerd[1819]: time="2025-08-13T07:16:04.261440237Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Aug 13 07:16:04.264739 containerd[1819]: time="2025-08-13T07:16:04.264674210Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:04.270302 containerd[1819]: time="2025-08-13T07:16:04.270243037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:04.271626 containerd[1819]: time="2025-08-13T07:16:04.271442564Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.929143268s" Aug 13 07:16:04.271626 containerd[1819]: time="2025-08-13T07:16:04.271492865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 07:16:07.042417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:07.048469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:07.093920 systemd[1]: Reloading requested from client PID 2850 ('systemctl') (unit session-9.scope)... Aug 13 07:16:07.093938 systemd[1]: Reloading... 
Aug 13 07:16:07.214201 zram_generator::config[2890]: No configuration found. Aug 13 07:16:07.350608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:07.450089 systemd[1]: Reloading finished in 355 ms. Aug 13 07:16:07.499536 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:16:07.499927 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:16:07.500644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:07.509538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:07.830362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:07.837562 (kubelet)[2970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:16:07.874190 kubelet[2970]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:16:07.874190 kubelet[2970]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:16:07.874190 kubelet[2970]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:16:07.874741 kubelet[2970]: I0813 07:16:07.874256 2970 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:16:08.248477 kubelet[2970]: I0813 07:16:08.248346 2970 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:16:08.248477 kubelet[2970]: I0813 07:16:08.248382 2970 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:16:08.249100 kubelet[2970]: I0813 07:16:08.248709 2970 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:16:08.572783 kubelet[2970]: E0813 07:16:08.572734 2970 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:08.574179 kubelet[2970]: I0813 07:16:08.573878 2970 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:16:08.585532 kubelet[2970]: E0813 07:16:08.585485 2970 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:16:08.585532 kubelet[2970]: I0813 07:16:08.585526 2970 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:16:08.590860 kubelet[2970]: I0813 07:16:08.590527 2970 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:16:08.595369 kubelet[2970]: I0813 07:16:08.595329 2970 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:16:08.595602 kubelet[2970]: I0813 07:16:08.595546 2970 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:16:08.595811 kubelet[2970]: I0813 07:16:08.595598 2970 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-a-0c3b310332","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:16:08.595978 kubelet[2970]: I0813 07:16:08.595821 2970 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:16:08.595978 kubelet[2970]: I0813 07:16:08.595835 2970 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:16:08.596055 kubelet[2970]: I0813 07:16:08.595984 2970 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:08.599087 kubelet[2970]: I0813 07:16:08.598763 2970 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:16:08.599087 kubelet[2970]: I0813 07:16:08.598807 2970 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:16:08.599087 kubelet[2970]: I0813 07:16:08.598852 2970 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:16:08.599087 kubelet[2970]: I0813 07:16:08.598875 2970 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:16:08.606182 kubelet[2970]: W0813 07:16:08.605512 2970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-0c3b310332&limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Aug 13 07:16:08.606182 kubelet[2970]: E0813 07:16:08.605609 2970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-0c3b310332&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:08.606182 kubelet[2970]: I0813 07:16:08.605715 2970 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:16:08.606496 kubelet[2970]: I0813 07:16:08.606479 2970 kubelet.go:837] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode" Aug 13 07:16:08.607412 kubelet[2970]: W0813 07:16:08.607387 2970 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:16:08.607778 kubelet[2970]: W0813 07:16:08.607722 2970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Aug 13 07:16:08.607861 kubelet[2970]: E0813 07:16:08.607801 2970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:08.609804 kubelet[2970]: I0813 07:16:08.609785 2970 server.go:1274] "Started kubelet" Aug 13 07:16:08.611486 kubelet[2970]: I0813 07:16:08.611459 2970 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:16:08.618239 kubelet[2970]: E0813 07:16:08.615846 2970 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-a-0c3b310332.185b424ac0273669 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-a-0c3b310332,UID:ci-4081.3.5-a-0c3b310332,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-a-0c3b310332,},FirstTimestamp:2025-08-13 07:16:08.609748585 +0000 UTC m=+0.768549178,LastTimestamp:2025-08-13 07:16:08.609748585 +0000 UTC 
m=+0.768549178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-a-0c3b310332,}" Aug 13 07:16:08.618239 kubelet[2970]: I0813 07:16:08.617526 2970 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:16:08.618620 kubelet[2970]: I0813 07:16:08.618603 2970 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:16:08.618985 kubelet[2970]: E0813 07:16:08.618962 2970 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-0c3b310332\" not found" Aug 13 07:16:08.619308 kubelet[2970]: I0813 07:16:08.619289 2970 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:16:08.621986 kubelet[2970]: I0813 07:16:08.621945 2970 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:16:08.622222 kubelet[2970]: I0813 07:16:08.622205 2970 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:16:08.622293 kubelet[2970]: I0813 07:16:08.622266 2970 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:16:08.623176 kubelet[2970]: I0813 07:16:08.622575 2970 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:16:08.623176 kubelet[2970]: I0813 07:16:08.622838 2970 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:16:08.625435 kubelet[2970]: E0813 07:16:08.625396 2970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-0c3b310332?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="200ms" Aug 13 07:16:08.625843 kubelet[2970]: I0813 07:16:08.625824 2970 factory.go:221] 
Registration of the systemd container factory successfully Aug 13 07:16:08.626057 kubelet[2970]: I0813 07:16:08.626039 2970 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:16:08.627541 kubelet[2970]: W0813 07:16:08.627497 2970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Aug 13 07:16:08.627611 kubelet[2970]: E0813 07:16:08.627562 2970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:08.627820 kubelet[2970]: I0813 07:16:08.627796 2970 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:16:08.676261 kubelet[2970]: I0813 07:16:08.676231 2970 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:16:08.676454 kubelet[2970]: I0813 07:16:08.676328 2970 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:16:08.676454 kubelet[2970]: I0813 07:16:08.676355 2970 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:08.684281 kubelet[2970]: I0813 07:16:08.684243 2970 policy_none.go:49] "None policy: Start" Aug 13 07:16:08.685822 kubelet[2970]: I0813 07:16:08.685204 2970 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:16:08.685822 kubelet[2970]: I0813 07:16:08.685235 2970 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:16:08.686381 kubelet[2970]: I0813 07:16:08.686348 2970 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Aug 13 07:16:08.688031 kubelet[2970]: I0813 07:16:08.688011 2970 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:16:08.688863 kubelet[2970]: I0813 07:16:08.688082 2970 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:16:08.688863 kubelet[2970]: I0813 07:16:08.688107 2970 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:16:08.688863 kubelet[2970]: E0813 07:16:08.688238 2970 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:16:08.694827 kubelet[2970]: I0813 07:16:08.694656 2970 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:16:08.694975 kubelet[2970]: I0813 07:16:08.694916 2970 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:16:08.694975 kubelet[2970]: I0813 07:16:08.694932 2970 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:16:08.697013 kubelet[2970]: I0813 07:16:08.696836 2970 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:16:08.697580 kubelet[2970]: W0813 07:16:08.697544 2970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Aug 13 07:16:08.697783 kubelet[2970]: E0813 07:16:08.697596 2970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:08.700916 kubelet[2970]: E0813 
07:16:08.700870 2970 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-a-0c3b310332\" not found" Aug 13 07:16:08.797988 kubelet[2970]: I0813 07:16:08.797958 2970 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.798559 kubelet[2970]: E0813 07:16:08.798526 2970 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.824514 kubelet[2970]: I0813 07:16:08.824308 2970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.824514 kubelet[2970]: I0813 07:16:08.824356 2970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.824514 kubelet[2970]: I0813 07:16:08.824380 2970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca04c47a21991bf0aa87bfacd57bef76-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-0c3b310332\" (UID: \"ca04c47a21991bf0aa87bfacd57bef76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.824514 kubelet[2970]: I0813 07:16:08.824402 2970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca04c47a21991bf0aa87bfacd57bef76-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-0c3b310332\" (UID: \"ca04c47a21991bf0aa87bfacd57bef76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.824514 kubelet[2970]: I0813 07:16:08.824424 2970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca04c47a21991bf0aa87bfacd57bef76-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-0c3b310332\" (UID: \"ca04c47a21991bf0aa87bfacd57bef76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.824872 kubelet[2970]: I0813 07:16:08.824442 2970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.826815 kubelet[2970]: E0813 07:16:08.826772 2970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-0c3b310332?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="400ms" Aug 13 07:16:08.925341 kubelet[2970]: I0813 07:16:08.925295 2970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8da0cc7fad83026aca669322d07bbf7-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-0c3b310332\" (UID: \"c8da0cc7fad83026aca669322d07bbf7\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.925341 kubelet[2970]: I0813 
07:16:08.925341 2970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:08.925860 kubelet[2970]: I0813 07:16:08.925419 2970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:09.001715 kubelet[2970]: I0813 07:16:09.001679 2970 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:09.002079 kubelet[2970]: E0813 07:16:09.002044 2970 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:09.099419 containerd[1819]: time="2025-08-13T07:16:09.099283083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-0c3b310332,Uid:ca04c47a21991bf0aa87bfacd57bef76,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:09.105795 containerd[1819]: time="2025-08-13T07:16:09.105733359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-0c3b310332,Uid:bb2322e5a555d79527eeadbc86dc0b8a,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:09.106074 containerd[1819]: time="2025-08-13T07:16:09.105731659Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-0c3b310332,Uid:c8da0cc7fad83026aca669322d07bbf7,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:09.227611 kubelet[2970]: E0813 07:16:09.227550 2970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-0c3b310332?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="800ms" Aug 13 07:16:09.404468 kubelet[2970]: I0813 07:16:09.404362 2970 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:09.404805 kubelet[2970]: E0813 07:16:09.404768 2970 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:09.588544 kubelet[2970]: W0813 07:16:09.588470 2970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Aug 13 07:16:09.588699 kubelet[2970]: E0813 07:16:09.588553 2970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:09.755858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670541040.mount: Deactivated successfully. 
Aug 13 07:16:09.788032 containerd[1819]: time="2025-08-13T07:16:09.787963925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:09.791246 containerd[1819]: time="2025-08-13T07:16:09.791183763Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Aug 13 07:16:09.793730 containerd[1819]: time="2025-08-13T07:16:09.793682993Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:09.796714 containerd[1819]: time="2025-08-13T07:16:09.796665128Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:09.799284 containerd[1819]: time="2025-08-13T07:16:09.799156258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:16:09.804660 containerd[1819]: time="2025-08-13T07:16:09.804608122Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:09.807168 containerd[1819]: time="2025-08-13T07:16:09.806900649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:16:09.812330 containerd[1819]: time="2025-08-13T07:16:09.812257013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:09.813612 
containerd[1819]: time="2025-08-13T07:16:09.813281825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 707.311963ms" Aug 13 07:16:09.814350 containerd[1819]: time="2025-08-13T07:16:09.814313137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 714.941253ms" Aug 13 07:16:09.822825 containerd[1819]: time="2025-08-13T07:16:09.822602235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 716.772775ms" Aug 13 07:16:09.823004 kubelet[2970]: W0813 07:16:09.822932 2970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Aug 13 07:16:09.823066 kubelet[2970]: E0813 07:16:09.823024 2970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:10.024937 kubelet[2970]: W0813 
07:16:10.024799 2970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Aug 13 07:16:10.024937 kubelet[2970]: E0813 07:16:10.024854 2970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:10.029000 kubelet[2970]: E0813 07:16:10.028950 2970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-0c3b310332?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="1.6s" Aug 13 07:16:10.095706 kubelet[2970]: W0813 07:16:10.095637 2970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-0c3b310332&limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Aug 13 07:16:10.095706 kubelet[2970]: E0813 07:16:10.095713 2970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-0c3b310332&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:10.207633 kubelet[2970]: I0813 07:16:10.207590 2970 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:10.208026 kubelet[2970]: E0813 07:16:10.207989 2970 kubelet_node_status.go:95] "Unable to 
register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:10.649537 kubelet[2970]: E0813 07:16:10.649402 2970 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:10.658772 containerd[1819]: time="2025-08-13T07:16:10.658602419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:10.659264 containerd[1819]: time="2025-08-13T07:16:10.658855622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:10.659921 containerd[1819]: time="2025-08-13T07:16:10.659545930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:10.661632 containerd[1819]: time="2025-08-13T07:16:10.661240250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:10.661965 containerd[1819]: time="2025-08-13T07:16:10.661686655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:10.661965 containerd[1819]: time="2025-08-13T07:16:10.661743756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:10.661965 containerd[1819]: time="2025-08-13T07:16:10.661763156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:10.661965 containerd[1819]: time="2025-08-13T07:16:10.661863157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:10.671515 containerd[1819]: time="2025-08-13T07:16:10.671403970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:10.671681 containerd[1819]: time="2025-08-13T07:16:10.671541872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:10.671681 containerd[1819]: time="2025-08-13T07:16:10.671573872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:10.671866 containerd[1819]: time="2025-08-13T07:16:10.671742574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:10.788775 containerd[1819]: time="2025-08-13T07:16:10.788722057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-0c3b310332,Uid:c8da0cc7fad83026aca669322d07bbf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e54b4e963aa948d67d825a075b108995a0d06b4b8ac9ed8a66f189127a3b510f\"" Aug 13 07:16:10.795054 containerd[1819]: time="2025-08-13T07:16:10.794876530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-0c3b310332,Uid:bb2322e5a555d79527eeadbc86dc0b8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"08d8c6a7e25ada817eb1c62375a5a3f86bca49284e1f340f3940a0ed6bd42754\"" Aug 13 07:16:10.797276 containerd[1819]: time="2025-08-13T07:16:10.797098556Z" level=info msg="CreateContainer within sandbox \"e54b4e963aa948d67d825a075b108995a0d06b4b8ac9ed8a66f189127a3b510f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:16:10.802097 containerd[1819]: time="2025-08-13T07:16:10.801920213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-0c3b310332,Uid:ca04c47a21991bf0aa87bfacd57bef76,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf12246232efc929a5e9741277856a4a2e827957b5584ff0f7ecd5e1c11e62b5\"" Aug 13 07:16:10.805711 containerd[1819]: time="2025-08-13T07:16:10.805584657Z" level=info msg="CreateContainer within sandbox \"08d8c6a7e25ada817eb1c62375a5a3f86bca49284e1f340f3940a0ed6bd42754\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:16:10.806560 containerd[1819]: time="2025-08-13T07:16:10.806417567Z" level=info msg="CreateContainer within sandbox \"bf12246232efc929a5e9741277856a4a2e827957b5584ff0f7ecd5e1c11e62b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:16:10.854555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466424057.mount: Deactivated successfully. 
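The kubelet reflector errors above all fail with `connect: connection refused` against `https://10.200.4.34:6443` because the kube-apiserver static pod is itself still being created by this kubelet; the reflectors simply keep re-listing until the endpoint accepts connections. A minimal sketch of that probe-and-retry pattern (function name and parameters are illustrative, not part of client-go):

```python
import socket
import time

def wait_for_endpoint(host, port, attempts=5, base_delay=0.1):
    """Retry a TCP connect with exponential backoff, mirroring how the
    kubelet's reflectors keep re-listing until the apiserver accepts
    connections. Returns True once a connect succeeds."""
    delay = base_delay
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:  # e.g. ECONNREFUSED while the apiserver pod starts
            time.sleep(delay)
            delay *= 2
    return False
```

In the log this resolves itself once the `kube-apiserver` container started at 07:16:11; the node registration that failed at 07:16:10 succeeds on retry at 07:16:11.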
Aug 13 07:16:10.879005 containerd[1819]: time="2025-08-13T07:16:10.878953124Z" level=info msg="CreateContainer within sandbox \"08d8c6a7e25ada817eb1c62375a5a3f86bca49284e1f340f3940a0ed6bd42754\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d5e68948ba83be36d2451c21cde585cc88beac15d38b7d2ae3c83020de06dac5\"" Aug 13 07:16:10.879792 containerd[1819]: time="2025-08-13T07:16:10.879758434Z" level=info msg="StartContainer for \"d5e68948ba83be36d2451c21cde585cc88beac15d38b7d2ae3c83020de06dac5\"" Aug 13 07:16:10.884406 containerd[1819]: time="2025-08-13T07:16:10.884040784Z" level=info msg="CreateContainer within sandbox \"e54b4e963aa948d67d825a075b108995a0d06b4b8ac9ed8a66f189127a3b510f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59d773d89d3047c05ec9a0b541b3932dc37a403517414eed1a8ae3eedae1afa1\"" Aug 13 07:16:10.884885 containerd[1819]: time="2025-08-13T07:16:10.884840994Z" level=info msg="StartContainer for \"59d773d89d3047c05ec9a0b541b3932dc37a403517414eed1a8ae3eedae1afa1\"" Aug 13 07:16:10.886475 containerd[1819]: time="2025-08-13T07:16:10.886437013Z" level=info msg="CreateContainer within sandbox \"bf12246232efc929a5e9741277856a4a2e827957b5584ff0f7ecd5e1c11e62b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"53a4c506999b9fb659cd304b153294949eb583f360b57fd2f74fe8446ea08c6f\"" Aug 13 07:16:10.887189 containerd[1819]: time="2025-08-13T07:16:10.887128021Z" level=info msg="StartContainer for \"53a4c506999b9fb659cd304b153294949eb583f360b57fd2f74fe8446ea08c6f\"" Aug 13 07:16:11.010365 containerd[1819]: time="2025-08-13T07:16:11.010209276Z" level=info msg="StartContainer for \"53a4c506999b9fb659cd304b153294949eb583f360b57fd2f74fe8446ea08c6f\" returns successfully" Aug 13 07:16:11.076972 containerd[1819]: time="2025-08-13T07:16:11.076917665Z" level=info msg="StartContainer for \"d5e68948ba83be36d2451c21cde585cc88beac15d38b7d2ae3c83020de06dac5\" returns successfully" Aug 
13 07:16:11.097669 containerd[1819]: time="2025-08-13T07:16:11.097618509Z" level=info msg="StartContainer for \"59d773d89d3047c05ec9a0b541b3932dc37a403517414eed1a8ae3eedae1afa1\" returns successfully" Aug 13 07:16:11.813967 kubelet[2970]: I0813 07:16:11.813585 2970 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:13.139736 kubelet[2970]: E0813 07:16:13.139685 2970 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-a-0c3b310332\" not found" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:13.170048 kubelet[2970]: I0813 07:16:13.170005 2970 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:13.610427 kubelet[2970]: I0813 07:16:13.610384 2970 apiserver.go:52] "Watching apiserver" Aug 13 07:16:13.622906 kubelet[2970]: I0813 07:16:13.622854 2970 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:16:15.507522 systemd[1]: Reloading requested from client PID 3241 ('systemctl') (unit session-9.scope)... Aug 13 07:16:15.507540 systemd[1]: Reloading... Aug 13 07:16:15.603199 zram_generator::config[3278]: No configuration found. Aug 13 07:16:15.746643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:15.836532 systemd[1]: Reloading finished in 328 ms. Aug 13 07:16:15.882416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:15.889345 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:16:15.889870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:15.897952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 07:16:16.401800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:16.413672 (kubelet)[3358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:16:16.676358 kubelet[3358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:16:16.676358 kubelet[3358]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:16:16.676358 kubelet[3358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:16:16.676850 kubelet[3358]: I0813 07:16:16.676363 3358 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:16:16.684572 kubelet[3358]: I0813 07:16:16.684529 3358 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:16:16.684572 kubelet[3358]: I0813 07:16:16.684560 3358 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:16:16.684874 kubelet[3358]: I0813 07:16:16.684854 3358 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:16:16.686439 kubelet[3358]: I0813 07:16:16.686403 3358 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
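The three deprecation warnings above say the same thing: `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the file passed via `--config`, while `--pod-infra-container-image` has no config counterpart and will eventually be sourced from the CRI runtime. A hedged KubeletConfiguration fragment showing the equivalent fields (paths illustrative, not taken from this host):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (config field exists since v1.27)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces --volume-plugin-dir
volumePluginDir: /var/lib/kubelet/volumeplugins
# --pod-infra-container-image has no config-file equivalent; per the warning,
# the image garbage collector will get sandbox image information from CRI
```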
Aug 13 07:16:16.688921 kubelet[3358]: I0813 07:16:16.688636 3358 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:16:16.699013 kubelet[3358]: E0813 07:16:16.698969 3358 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:16:16.699013 kubelet[3358]: I0813 07:16:16.699010 3358 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:16:16.703100 kubelet[3358]: I0813 07:16:16.702647 3358 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:16:16.705080 kubelet[3358]: I0813 07:16:16.704621 3358 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:16:16.705080 kubelet[3358]: I0813 07:16:16.704785 3358 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:16:16.705080 kubelet[3358]: I0813 07:16:16.704819 3358 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.5-a-0c3b310332","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:16:16.705080 kubelet[3358]: I0813 07:16:16.705059 3358 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:16:16.705445 kubelet[3358]: I0813 07:16:16.705075 3358 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:16:16.705445 kubelet[3358]: I0813 07:16:16.705114 3358 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:16.705445 kubelet[3358]: I0813 07:16:16.705293 3358 
kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:16:16.705445 kubelet[3358]: I0813 07:16:16.705311 3358 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:16:16.708785 kubelet[3358]: I0813 07:16:16.706218 3358 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:16:16.708785 kubelet[3358]: I0813 07:16:16.708179 3358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:16:16.709849 kubelet[3358]: I0813 07:16:16.709470 3358 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:16:16.710393 kubelet[3358]: I0813 07:16:16.709963 3358 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:16:16.712749 kubelet[3358]: I0813 07:16:16.712357 3358 server.go:1274] "Started kubelet" Aug 13 07:16:16.716626 kubelet[3358]: I0813 07:16:16.716595 3358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:16:16.725909 kubelet[3358]: I0813 07:16:16.725857 3358 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:16:16.727219 kubelet[3358]: I0813 07:16:16.727008 3358 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:16:16.730918 kubelet[3358]: I0813 07:16:16.728337 3358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:16:16.730918 kubelet[3358]: I0813 07:16:16.728570 3358 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:16:16.730918 kubelet[3358]: I0813 07:16:16.728827 3358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:16:16.731139 kubelet[3358]: I0813 07:16:16.731076 3358 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:16:16.734196 kubelet[3358]: 
E0813 07:16:16.731383 3358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-0c3b310332\" not found" Aug 13 07:16:16.735690 kubelet[3358]: I0813 07:16:16.735381 3358 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:16:16.735690 kubelet[3358]: I0813 07:16:16.735515 3358 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:16:16.744472 kubelet[3358]: I0813 07:16:16.744430 3358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:16:16.747008 kubelet[3358]: I0813 07:16:16.746979 3358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:16:16.747593 kubelet[3358]: I0813 07:16:16.747184 3358 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:16:16.747593 kubelet[3358]: I0813 07:16:16.747214 3358 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:16:16.747593 kubelet[3358]: E0813 07:16:16.747270 3358 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:16:16.750486 kubelet[3358]: I0813 07:16:16.750456 3358 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:16:16.750732 kubelet[3358]: I0813 07:16:16.750578 3358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:16:16.764503 kubelet[3358]: I0813 07:16:16.764471 3358 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:16:16.770120 kubelet[3358]: E0813 07:16:16.770029 3358 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:16:16.826337 kubelet[3358]: I0813 07:16:16.826300 3358 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:16:16.826337 kubelet[3358]: I0813 07:16:16.826336 3358 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:16:16.826514 kubelet[3358]: I0813 07:16:16.826359 3358 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:16.826586 kubelet[3358]: I0813 07:16:16.826543 3358 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:16:16.826586 kubelet[3358]: I0813 07:16:16.826558 3358 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:16:16.826586 kubelet[3358]: I0813 07:16:16.826583 3358 policy_none.go:49] "None policy: Start" Aug 13 07:16:16.827328 kubelet[3358]: I0813 07:16:16.827309 3358 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:16:16.827734 kubelet[3358]: I0813 07:16:16.827484 3358 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:16:16.827734 kubelet[3358]: I0813 07:16:16.827662 3358 state_mem.go:75] "Updated machine memory state" Aug 13 07:16:16.829379 kubelet[3358]: I0813 07:16:16.829352 3358 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:16:16.829724 kubelet[3358]: I0813 07:16:16.829562 3358 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:16:16.829724 kubelet[3358]: I0813 07:16:16.829580 3358 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:16:16.832376 kubelet[3358]: I0813 07:16:16.832354 3358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:16:16.867261 kubelet[3358]: W0813 07:16:16.867228 3358 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots] Aug 13 07:16:16.873972 kubelet[3358]: W0813 07:16:16.873762 3358 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:16:16.873972 kubelet[3358]: W0813 07:16:16.873869 3358 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:16:16.934016 kubelet[3358]: I0813 07:16:16.932986 3358 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:16.953639 kubelet[3358]: I0813 07:16:16.952478 3358 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:16.954207 kubelet[3358]: I0813 07:16:16.953886 3358 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.037784 kubelet[3358]: I0813 07:16:17.037275 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca04c47a21991bf0aa87bfacd57bef76-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-0c3b310332\" (UID: \"ca04c47a21991bf0aa87bfacd57bef76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.037784 kubelet[3358]: I0813 07:16:17.037333 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca04c47a21991bf0aa87bfacd57bef76-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-0c3b310332\" (UID: \"ca04c47a21991bf0aa87bfacd57bef76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.037784 kubelet[3358]: I0813 07:16:17.037371 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.037784 kubelet[3358]: I0813 07:16:17.037394 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.037784 kubelet[3358]: I0813 07:16:17.037421 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.038025 kubelet[3358]: I0813 07:16:17.037444 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca04c47a21991bf0aa87bfacd57bef76-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-0c3b310332\" (UID: \"ca04c47a21991bf0aa87bfacd57bef76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.038025 kubelet[3358]: I0813 07:16:17.037469 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.038025 
kubelet[3358]: I0813 07:16:17.037493 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb2322e5a555d79527eeadbc86dc0b8a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-0c3b310332\" (UID: \"bb2322e5a555d79527eeadbc86dc0b8a\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.038025 kubelet[3358]: I0813 07:16:17.037520 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8da0cc7fad83026aca669322d07bbf7-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-0c3b310332\" (UID: \"c8da0cc7fad83026aca669322d07bbf7\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-0c3b310332" Aug 13 07:16:17.709631 kubelet[3358]: I0813 07:16:17.709591 3358 apiserver.go:52] "Watching apiserver" Aug 13 07:16:17.735944 kubelet[3358]: I0813 07:16:17.735869 3358 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:16:17.872798 kubelet[3358]: I0813 07:16:17.872556 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-0c3b310332" podStartSLOduration=1.87252614 podStartE2EDuration="1.87252614s" podCreationTimestamp="2025-08-13 07:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:16:17.842888644 +0000 UTC m=+1.425392438" watchObservedRunningTime="2025-08-13 07:16:17.87252614 +0000 UTC m=+1.455029934" Aug 13 07:16:17.902307 kubelet[3358]: I0813 07:16:17.901099 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-a-0c3b310332" podStartSLOduration=1.9010765219999999 podStartE2EDuration="1.901076522s" podCreationTimestamp="2025-08-13 07:16:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:16:17.873313151 +0000 UTC m=+1.455816845" watchObservedRunningTime="2025-08-13 07:16:17.901076522 +0000 UTC m=+1.483580316" Aug 13 07:16:17.925354 kubelet[3358]: I0813 07:16:17.925285 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-a-0c3b310332" podStartSLOduration=1.925248445 podStartE2EDuration="1.925248445s" podCreationTimestamp="2025-08-13 07:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:16:17.902700744 +0000 UTC m=+1.485204538" watchObservedRunningTime="2025-08-13 07:16:17.925248445 +0000 UTC m=+1.507752239" Aug 13 07:16:21.944967 kubelet[3358]: I0813 07:16:21.944931 3358 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:16:21.946021 containerd[1819]: time="2025-08-13T07:16:21.945766612Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
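At 07:16:21 the kubelet pushes the node's pod CIDR (`192.168.0.0/24`) to the runtime. A small sketch, using only the stdlib `ipaddress` module, of what that range implies for pod addressing on this node (the helper name is illustrative):

```python
import ipaddress

# The runtime config update in the log carries the node's pod CIDR.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")

# A /24 leaves 254 usable host addresses for pods on this node
# (network and broadcast addresses excluded).
usable = pod_cidr.num_addresses - 2

def in_pod_range(ip):
    """Return True if an address falls inside this node's pod CIDR."""
    return ipaddress.ip_address(ip) in pod_cidr
```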
Aug 13 07:16:21.946946 kubelet[3358]: I0813 07:16:21.946014 3358 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:16:22.872068 kubelet[3358]: I0813 07:16:22.872004 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eff957f5-2d1b-4932-9fc7-5fb9d40edaf7-xtables-lock\") pod \"kube-proxy-f8qs8\" (UID: \"eff957f5-2d1b-4932-9fc7-5fb9d40edaf7\") " pod="kube-system/kube-proxy-f8qs8" Aug 13 07:16:22.872068 kubelet[3358]: I0813 07:16:22.872053 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxqj\" (UniqueName: \"kubernetes.io/projected/eff957f5-2d1b-4932-9fc7-5fb9d40edaf7-kube-api-access-qrxqj\") pod \"kube-proxy-f8qs8\" (UID: \"eff957f5-2d1b-4932-9fc7-5fb9d40edaf7\") " pod="kube-system/kube-proxy-f8qs8" Aug 13 07:16:22.872368 kubelet[3358]: I0813 07:16:22.872085 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eff957f5-2d1b-4932-9fc7-5fb9d40edaf7-lib-modules\") pod \"kube-proxy-f8qs8\" (UID: \"eff957f5-2d1b-4932-9fc7-5fb9d40edaf7\") " pod="kube-system/kube-proxy-f8qs8" Aug 13 07:16:22.872368 kubelet[3358]: I0813 07:16:22.872107 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eff957f5-2d1b-4932-9fc7-5fb9d40edaf7-kube-proxy\") pod \"kube-proxy-f8qs8\" (UID: \"eff957f5-2d1b-4932-9fc7-5fb9d40edaf7\") " pod="kube-system/kube-proxy-f8qs8" Aug 13 07:16:23.073265 kubelet[3358]: I0813 07:16:23.073187 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/090e3ed1-2f95-450c-825e-77b1138ea0eb-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-4drkf\" 
(UID: \"090e3ed1-2f95-450c-825e-77b1138ea0eb\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-4drkf" Aug 13 07:16:23.073265 kubelet[3358]: I0813 07:16:23.073242 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkgvf\" (UniqueName: \"kubernetes.io/projected/090e3ed1-2f95-450c-825e-77b1138ea0eb-kube-api-access-xkgvf\") pod \"tigera-operator-5bf8dfcb4-4drkf\" (UID: \"090e3ed1-2f95-450c-825e-77b1138ea0eb\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-4drkf" Aug 13 07:16:23.109662 containerd[1819]: time="2025-08-13T07:16:23.109614876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8qs8,Uid:eff957f5-2d1b-4932-9fc7-5fb9d40edaf7,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:23.163987 containerd[1819]: time="2025-08-13T07:16:23.163677699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:23.163987 containerd[1819]: time="2025-08-13T07:16:23.163738800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:23.163987 containerd[1819]: time="2025-08-13T07:16:23.163753000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:23.163987 containerd[1819]: time="2025-08-13T07:16:23.163857701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:23.199975 systemd[1]: run-containerd-runc-k8s.io-05af0239db732b490262ede4f08cf7c2a0b610765552747e2c1a845aa582cebe-runc.4h4ntx.mount: Deactivated successfully. 
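The containerd entries for `kube-proxy-f8qs8` (and earlier for the control-plane pods) follow the standard CRI call sequence: `RunPodSandbox` returns a sandbox id, `CreateContainer` creates a container inside that sandbox and returns a container id, then `StartContainer` runs it. A minimal stub sketch of that sequence — these are hypothetical stand-ins, not the real CRI client, with ids derived deterministically for illustration:

```python
import hashlib

def run_pod_sandbox(name, uid, namespace):
    """Stand-in for CRI RunPodSandbox: returns a 64-hex-char sandbox id."""
    return hashlib.sha256(f"{namespace}/{name}/{uid}".encode()).hexdigest()

def create_container(sandbox_id, container_name):
    """Stand-in for CRI CreateContainer within an existing sandbox."""
    return hashlib.sha256(f"{sandbox_id}/{container_name}".encode()).hexdigest()

def start_container(container_id):
    """Stand-in for CRI StartContainer; logs success like containerd does."""
    return f'StartContainer for "{container_id}" returns successfully'
```

The sandbox and container ids in the log (e.g. `05af0239db73...`) have the same 64-hex-character shape.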
Aug 13 07:16:23.223451 containerd[1819]: time="2025-08-13T07:16:23.223408798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8qs8,Uid:eff957f5-2d1b-4932-9fc7-5fb9d40edaf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"05af0239db732b490262ede4f08cf7c2a0b610765552747e2c1a845aa582cebe\""
Aug 13 07:16:23.227839 containerd[1819]: time="2025-08-13T07:16:23.227516452Z" level=info msg="CreateContainer within sandbox \"05af0239db732b490262ede4f08cf7c2a0b610765552747e2c1a845aa582cebe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 07:16:23.279621 containerd[1819]: time="2025-08-13T07:16:23.279564148Z" level=info msg="CreateContainer within sandbox \"05af0239db732b490262ede4f08cf7c2a0b610765552747e2c1a845aa582cebe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"492e657a00ae9be8b0df8c5481d45280ca686cb9ea3d1ba8f44e7a239957022f\""
Aug 13 07:16:23.280646 containerd[1819]: time="2025-08-13T07:16:23.280384259Z" level=info msg="StartContainer for \"492e657a00ae9be8b0df8c5481d45280ca686cb9ea3d1ba8f44e7a239957022f\""
Aug 13 07:16:23.346760 containerd[1819]: time="2025-08-13T07:16:23.346674246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-4drkf,Uid:090e3ed1-2f95-450c-825e-77b1138ea0eb,Namespace:tigera-operator,Attempt:0,}"
Aug 13 07:16:23.347192 containerd[1819]: time="2025-08-13T07:16:23.346693146Z" level=info msg="StartContainer for \"492e657a00ae9be8b0df8c5481d45280ca686cb9ea3d1ba8f44e7a239957022f\" returns successfully"
Aug 13 07:16:23.387389 containerd[1819]: time="2025-08-13T07:16:23.386913284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:16:23.387389 containerd[1819]: time="2025-08-13T07:16:23.386981585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:16:23.387389 containerd[1819]: time="2025-08-13T07:16:23.387002385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:16:23.387389 containerd[1819]: time="2025-08-13T07:16:23.387099687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:16:23.470965 containerd[1819]: time="2025-08-13T07:16:23.470259999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-4drkf,Uid:090e3ed1-2f95-450c-825e-77b1138ea0eb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a72d06cca6205ddd7a69fece289cf6c2d189895d8ec2915ed7f499beee02bea6\""
Aug 13 07:16:23.473855 containerd[1819]: time="2025-08-13T07:16:23.473558343Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 07:16:23.856548 kubelet[3358]: I0813 07:16:23.855562 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f8qs8" podStartSLOduration=1.8555376510000001 podStartE2EDuration="1.855537651s" podCreationTimestamp="2025-08-13 07:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:16:23.836832401 +0000 UTC m=+7.419336095" watchObservedRunningTime="2025-08-13 07:16:23.855537651 +0000 UTC m=+7.438041345"
Aug 13 07:16:24.988909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770084924.mount: Deactivated successfully.
Aug 13 07:16:25.916778 containerd[1819]: time="2025-08-13T07:16:25.916668723Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:25.920493 containerd[1819]: time="2025-08-13T07:16:25.920438674Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Aug 13 07:16:25.926180 containerd[1819]: time="2025-08-13T07:16:25.925762746Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:25.930141 containerd[1819]: time="2025-08-13T07:16:25.930045204Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:25.930967 containerd[1819]: time="2025-08-13T07:16:25.930805414Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.457201171s"
Aug 13 07:16:25.930967 containerd[1819]: time="2025-08-13T07:16:25.930846714Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Aug 13 07:16:25.934166 containerd[1819]: time="2025-08-13T07:16:25.933825854Z" level=info msg="CreateContainer within sandbox \"a72d06cca6205ddd7a69fece289cf6c2d189895d8ec2915ed7f499beee02bea6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 07:16:25.966313 containerd[1819]: time="2025-08-13T07:16:25.966259592Z" level=info msg="CreateContainer within sandbox \"a72d06cca6205ddd7a69fece289cf6c2d189895d8ec2915ed7f499beee02bea6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2b1ef82005d1b77dcef4c632e20ee648f3e9e35019ac0adf3c010e584577909d\""
Aug 13 07:16:25.966998 containerd[1819]: time="2025-08-13T07:16:25.966964101Z" level=info msg="StartContainer for \"2b1ef82005d1b77dcef4c632e20ee648f3e9e35019ac0adf3c010e584577909d\""
Aug 13 07:16:26.032222 containerd[1819]: time="2025-08-13T07:16:26.031825475Z" level=info msg="StartContainer for \"2b1ef82005d1b77dcef4c632e20ee648f3e9e35019ac0adf3c010e584577909d\" returns successfully"
Aug 13 07:16:30.435027 kubelet[3358]: I0813 07:16:30.434892 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-4drkf" podStartSLOduration=5.975917732 podStartE2EDuration="8.434869626s" podCreationTimestamp="2025-08-13 07:16:22 +0000 UTC" firstStartedPulling="2025-08-13 07:16:23.472958235 +0000 UTC m=+7.055461929" lastFinishedPulling="2025-08-13 07:16:25.931910029 +0000 UTC m=+9.514413823" observedRunningTime="2025-08-13 07:16:26.838178345 +0000 UTC m=+10.420682039" watchObservedRunningTime="2025-08-13 07:16:30.434869626 +0000 UTC m=+14.017373420"
Aug 13 07:16:33.511237 sudo[2385]: pam_unix(sudo:session): session closed for user root
Aug 13 07:16:33.614398 sshd[2381]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:33.618848 systemd-logind[1793]: Session 9 logged out. Waiting for processes to exit.
Aug 13 07:16:33.620503 systemd[1]: sshd@6-10.200.4.34:22-10.200.16.10:41014.service: Deactivated successfully.
Aug 13 07:16:33.625777 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 07:16:33.630482 systemd-logind[1793]: Removed session 9.
Aug 13 07:16:37.979526 kubelet[3358]: I0813 07:16:37.978646 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3364b54f-1f6a-475b-8532-e9424bb71a7a-typha-certs\") pod \"calico-typha-6796f9f49d-cjj2l\" (UID: \"3364b54f-1f6a-475b-8532-e9424bb71a7a\") " pod="calico-system/calico-typha-6796f9f49d-cjj2l"
Aug 13 07:16:37.979526 kubelet[3358]: I0813 07:16:37.979321 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3364b54f-1f6a-475b-8532-e9424bb71a7a-tigera-ca-bundle\") pod \"calico-typha-6796f9f49d-cjj2l\" (UID: \"3364b54f-1f6a-475b-8532-e9424bb71a7a\") " pod="calico-system/calico-typha-6796f9f49d-cjj2l"
Aug 13 07:16:37.979526 kubelet[3358]: I0813 07:16:37.979352 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhml2\" (UniqueName: \"kubernetes.io/projected/3364b54f-1f6a-475b-8532-e9424bb71a7a-kube-api-access-zhml2\") pod \"calico-typha-6796f9f49d-cjj2l\" (UID: \"3364b54f-1f6a-475b-8532-e9424bb71a7a\") " pod="calico-system/calico-typha-6796f9f49d-cjj2l"
Aug 13 07:16:38.181123 kubelet[3358]: I0813 07:16:38.181044 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-policysync\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.181123 kubelet[3358]: I0813 07:16:38.181126 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-var-run-calico\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.181701 kubelet[3358]: I0813 07:16:38.181636 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-tigera-ca-bundle\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.182095 kubelet[3358]: I0813 07:16:38.182062 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-cni-bin-dir\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.182482 kubelet[3358]: I0813 07:16:38.182453 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-cni-net-dir\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.182615 kubelet[3358]: I0813 07:16:38.182594 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-flexvol-driver-host\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.182677 kubelet[3358]: I0813 07:16:38.182631 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-node-certs\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.182729 kubelet[3358]: I0813 07:16:38.182686 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-cni-log-dir\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.182775 kubelet[3358]: I0813 07:16:38.182737 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-xtables-lock\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.182775 kubelet[3358]: I0813 07:16:38.182763 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-lib-modules\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.182863 kubelet[3358]: I0813 07:16:38.182817 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-var-lib-calico\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.182863 kubelet[3358]: I0813 07:16:38.182843 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmxm\" (UniqueName: \"kubernetes.io/projected/28c2ded2-0e4b-4c3c-9002-7bb02e88ef38-kube-api-access-tnmxm\") pod \"calico-node-smzrt\" (UID: \"28c2ded2-0e4b-4c3c-9002-7bb02e88ef38\") " pod="calico-system/calico-node-smzrt"
Aug 13 07:16:38.225805 containerd[1819]: time="2025-08-13T07:16:38.225665854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6796f9f49d-cjj2l,Uid:3364b54f-1f6a-475b-8532-e9424bb71a7a,Namespace:calico-system,Attempt:0,}"
Aug 13 07:16:38.294367 containerd[1819]: time="2025-08-13T07:16:38.290285352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:16:38.294367 containerd[1819]: time="2025-08-13T07:16:38.290355553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:16:38.294367 containerd[1819]: time="2025-08-13T07:16:38.290372954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:16:38.294367 containerd[1819]: time="2025-08-13T07:16:38.290487255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:16:38.309179 kubelet[3358]: E0813 07:16:38.306252 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.309179 kubelet[3358]: W0813 07:16:38.306283 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.309179 kubelet[3358]: E0813 07:16:38.306316 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.313167 kubelet[3358]: E0813 07:16:38.312295 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.313167 kubelet[3358]: W0813 07:16:38.312333 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.313167 kubelet[3358]: E0813 07:16:38.312360 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.365058 containerd[1819]: time="2025-08-13T07:16:38.365007575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6796f9f49d-cjj2l,Uid:3364b54f-1f6a-475b-8532-e9424bb71a7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0be87045e0c15066ba57de3062eb1d2da3f5e998458eb4976c56fd453c735a7e\""
Aug 13 07:16:38.367101 containerd[1819]: time="2025-08-13T07:16:38.366953700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 07:16:38.420714 kubelet[3358]: E0813 07:16:38.420358 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spflk" podUID="7b98a07a-1303-4fe2-9cdb-fec495674a1b"
Aug 13 07:16:38.434174 containerd[1819]: time="2025-08-13T07:16:38.434106629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smzrt,Uid:28c2ded2-0e4b-4c3c-9002-7bb02e88ef38,Namespace:calico-system,Attempt:0,}"
Aug 13 07:16:38.470615 kubelet[3358]: E0813 07:16:38.470476 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.470615 kubelet[3358]: W0813 07:16:38.470502 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.470615 kubelet[3358]: E0813 07:16:38.470533 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.473258 kubelet[3358]: E0813 07:16:38.473223 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.473258 kubelet[3358]: W0813 07:16:38.473256 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.473422 kubelet[3358]: E0813 07:16:38.473293 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.474253 kubelet[3358]: E0813 07:16:38.473611 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.474253 kubelet[3358]: W0813 07:16:38.473630 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.474253 kubelet[3358]: E0813 07:16:38.473650 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.474253 kubelet[3358]: E0813 07:16:38.473906 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.474253 kubelet[3358]: W0813 07:16:38.473920 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.474253 kubelet[3358]: E0813 07:16:38.473936 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.476101 kubelet[3358]: E0813 07:16:38.474491 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.476101 kubelet[3358]: W0813 07:16:38.474508 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.476101 kubelet[3358]: E0813 07:16:38.474564 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.476101 kubelet[3358]: E0813 07:16:38.474796 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.476101 kubelet[3358]: W0813 07:16:38.474809 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.476101 kubelet[3358]: E0813 07:16:38.474826 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.476101 kubelet[3358]: E0813 07:16:38.475231 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.476101 kubelet[3358]: W0813 07:16:38.475244 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.476101 kubelet[3358]: E0813 07:16:38.475259 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.477876 kubelet[3358]: E0813 07:16:38.476475 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.477876 kubelet[3358]: W0813 07:16:38.476489 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.477876 kubelet[3358]: E0813 07:16:38.476506 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.477876 kubelet[3358]: E0813 07:16:38.476733 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.477876 kubelet[3358]: W0813 07:16:38.476743 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.477876 kubelet[3358]: E0813 07:16:38.476756 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.477876 kubelet[3358]: E0813 07:16:38.476942 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.477876 kubelet[3358]: W0813 07:16:38.476952 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.477876 kubelet[3358]: E0813 07:16:38.476974 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.477876 kubelet[3358]: E0813 07:16:38.477193 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.480973 kubelet[3358]: W0813 07:16:38.477203 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.480973 kubelet[3358]: E0813 07:16:38.477216 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.480973 kubelet[3358]: E0813 07:16:38.477771 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.480973 kubelet[3358]: W0813 07:16:38.477827 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.480973 kubelet[3358]: E0813 07:16:38.477844 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.480973 kubelet[3358]: E0813 07:16:38.478467 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.480973 kubelet[3358]: W0813 07:16:38.478480 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.480973 kubelet[3358]: E0813 07:16:38.478495 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.480973 kubelet[3358]: E0813 07:16:38.479183 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.480973 kubelet[3358]: W0813 07:16:38.479199 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.482674 kubelet[3358]: E0813 07:16:38.479214 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.482674 kubelet[3358]: E0813 07:16:38.479747 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.482674 kubelet[3358]: W0813 07:16:38.479760 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.482674 kubelet[3358]: E0813 07:16:38.479773 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.482674 kubelet[3358]: E0813 07:16:38.481044 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.482674 kubelet[3358]: W0813 07:16:38.481057 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.482674 kubelet[3358]: E0813 07:16:38.481131 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.482674 kubelet[3358]: E0813 07:16:38.481416 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.482674 kubelet[3358]: W0813 07:16:38.481428 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.482674 kubelet[3358]: E0813 07:16:38.481443 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.483076 kubelet[3358]: E0813 07:16:38.481991 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.483076 kubelet[3358]: W0813 07:16:38.482003 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.483076 kubelet[3358]: E0813 07:16:38.482018 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.483076 kubelet[3358]: E0813 07:16:38.482604 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.483076 kubelet[3358]: W0813 07:16:38.482617 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.483076 kubelet[3358]: E0813 07:16:38.482632 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.483330 kubelet[3358]: E0813 07:16:38.483249 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.483330 kubelet[3358]: W0813 07:16:38.483262 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.483330 kubelet[3358]: E0813 07:16:38.483277 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.488214 kubelet[3358]: E0813 07:16:38.486008 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.488214 kubelet[3358]: W0813 07:16:38.486026 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.488214 kubelet[3358]: E0813 07:16:38.486043 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.488214 kubelet[3358]: I0813 07:16:38.486090 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7b98a07a-1303-4fe2-9cdb-fec495674a1b-varrun\") pod \"csi-node-driver-spflk\" (UID: \"7b98a07a-1303-4fe2-9cdb-fec495674a1b\") " pod="calico-system/csi-node-driver-spflk"
Aug 13 07:16:38.488214 kubelet[3358]: E0813 07:16:38.486438 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.488214 kubelet[3358]: W0813 07:16:38.486453 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.488214 kubelet[3358]: E0813 07:16:38.486476 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.488214 kubelet[3358]: I0813 07:16:38.486505 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7b98a07a-1303-4fe2-9cdb-fec495674a1b-registration-dir\") pod \"csi-node-driver-spflk\" (UID: \"7b98a07a-1303-4fe2-9cdb-fec495674a1b\") " pod="calico-system/csi-node-driver-spflk"
Aug 13 07:16:38.488214 kubelet[3358]: E0813 07:16:38.486772 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.488754 kubelet[3358]: W0813 07:16:38.486783 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.488754 kubelet[3358]: E0813 07:16:38.486808 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.488754 kubelet[3358]: E0813 07:16:38.487021 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.488754 kubelet[3358]: W0813 07:16:38.487031 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.488754 kubelet[3358]: E0813 07:16:38.487054 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.488754 kubelet[3358]: E0813 07:16:38.487309 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.488754 kubelet[3358]: W0813 07:16:38.487323 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.488754 kubelet[3358]: E0813 07:16:38.487347 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.488754 kubelet[3358]: I0813 07:16:38.487372 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb9m2\" (UniqueName: \"kubernetes.io/projected/7b98a07a-1303-4fe2-9cdb-fec495674a1b-kube-api-access-qb9m2\") pod \"csi-node-driver-spflk\" (UID: \"7b98a07a-1303-4fe2-9cdb-fec495674a1b\") " pod="calico-system/csi-node-driver-spflk"
Aug 13 07:16:38.489135 kubelet[3358]: E0813 07:16:38.487609 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.489135 kubelet[3358]: W0813 07:16:38.487623 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.489135 kubelet[3358]: E0813 07:16:38.487638 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.489135 kubelet[3358]: E0813 07:16:38.487848 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.489135 kubelet[3358]: W0813 07:16:38.487858 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.489135 kubelet[3358]: E0813 07:16:38.487883 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:16:38.489135 kubelet[3358]: E0813 07:16:38.488113 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:16:38.489135 kubelet[3358]: W0813 07:16:38.488124 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:16:38.489135 kubelet[3358]: E0813 07:16:38.488141 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 07:16:38.489540 kubelet[3358]: I0813 07:16:38.488182 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b98a07a-1303-4fe2-9cdb-fec495674a1b-kubelet-dir\") pod \"csi-node-driver-spflk\" (UID: \"7b98a07a-1303-4fe2-9cdb-fec495674a1b\") " pod="calico-system/csi-node-driver-spflk" Aug 13 07:16:38.489540 kubelet[3358]: E0813 07:16:38.488431 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.489540 kubelet[3358]: W0813 07:16:38.488448 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.489540 kubelet[3358]: E0813 07:16:38.488474 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.489540 kubelet[3358]: I0813 07:16:38.488499 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7b98a07a-1303-4fe2-9cdb-fec495674a1b-socket-dir\") pod \"csi-node-driver-spflk\" (UID: \"7b98a07a-1303-4fe2-9cdb-fec495674a1b\") " pod="calico-system/csi-node-driver-spflk" Aug 13 07:16:38.489540 kubelet[3358]: E0813 07:16:38.488734 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.489540 kubelet[3358]: W0813 07:16:38.488746 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.489540 kubelet[3358]: E0813 07:16:38.488761 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.489540 kubelet[3358]: E0813 07:16:38.488971 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.489932 kubelet[3358]: W0813 07:16:38.488981 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.489932 kubelet[3358]: E0813 07:16:38.488997 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.489932 kubelet[3358]: E0813 07:16:38.489213 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.489932 kubelet[3358]: W0813 07:16:38.489224 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.489932 kubelet[3358]: E0813 07:16:38.489246 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.489932 kubelet[3358]: E0813 07:16:38.489460 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.489932 kubelet[3358]: W0813 07:16:38.489471 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.489932 kubelet[3358]: E0813 07:16:38.489494 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.489932 kubelet[3358]: E0813 07:16:38.489702 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.489932 kubelet[3358]: W0813 07:16:38.489717 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.490457 kubelet[3358]: E0813 07:16:38.489729 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.490457 kubelet[3358]: E0813 07:16:38.489905 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.490457 kubelet[3358]: W0813 07:16:38.489914 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.490457 kubelet[3358]: E0813 07:16:38.489924 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.505031 containerd[1819]: time="2025-08-13T07:16:38.504612600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:38.506644 containerd[1819]: time="2025-08-13T07:16:38.506455323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:38.506644 containerd[1819]: time="2025-08-13T07:16:38.506482923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:38.506644 containerd[1819]: time="2025-08-13T07:16:38.506597925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:38.559705 containerd[1819]: time="2025-08-13T07:16:38.559509578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smzrt,Uid:28c2ded2-0e4b-4c3c-9002-7bb02e88ef38,Namespace:calico-system,Attempt:0,} returns sandbox id \"e94cb4bccd585efc87a76919be3c6b63384bad1fd7d4ea2bd6022fd2734b76e6\"" Aug 13 07:16:38.591637 kubelet[3358]: E0813 07:16:38.590496 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.591637 kubelet[3358]: W0813 07:16:38.590571 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.591637 kubelet[3358]: E0813 07:16:38.591555 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.592704 kubelet[3358]: E0813 07:16:38.592539 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.592704 kubelet[3358]: W0813 07:16:38.592565 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.592704 kubelet[3358]: E0813 07:16:38.592689 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.593821 kubelet[3358]: E0813 07:16:38.592994 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.593821 kubelet[3358]: W0813 07:16:38.593008 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.593821 kubelet[3358]: E0813 07:16:38.593029 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.593821 kubelet[3358]: E0813 07:16:38.593776 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.593821 kubelet[3358]: W0813 07:16:38.593791 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.593821 kubelet[3358]: E0813 07:16:38.593814 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.594743 kubelet[3358]: E0813 07:16:38.594399 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.594743 kubelet[3358]: W0813 07:16:38.594439 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.594743 kubelet[3358]: E0813 07:16:38.594518 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.595384 kubelet[3358]: E0813 07:16:38.595248 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.595384 kubelet[3358]: W0813 07:16:38.595278 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.595384 kubelet[3358]: E0813 07:16:38.595354 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.595918 kubelet[3358]: E0813 07:16:38.595768 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.595918 kubelet[3358]: W0813 07:16:38.595782 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.595918 kubelet[3358]: E0813 07:16:38.595850 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.596349 kubelet[3358]: E0813 07:16:38.596242 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.596349 kubelet[3358]: W0813 07:16:38.596257 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.596611 kubelet[3358]: E0813 07:16:38.596481 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.596803 kubelet[3358]: E0813 07:16:38.596728 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.596803 kubelet[3358]: W0813 07:16:38.596739 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.597101 kubelet[3358]: E0813 07:16:38.596985 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.597359 kubelet[3358]: E0813 07:16:38.597261 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.597359 kubelet[3358]: W0813 07:16:38.597272 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.597604 kubelet[3358]: E0813 07:16:38.597495 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.597815 kubelet[3358]: E0813 07:16:38.597719 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.597815 kubelet[3358]: W0813 07:16:38.597731 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.598092 kubelet[3358]: E0813 07:16:38.597986 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.598358 kubelet[3358]: E0813 07:16:38.598256 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.598358 kubelet[3358]: W0813 07:16:38.598268 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.598641 kubelet[3358]: E0813 07:16:38.598536 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.598858 kubelet[3358]: E0813 07:16:38.598763 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.598858 kubelet[3358]: W0813 07:16:38.598776 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.599113 kubelet[3358]: E0813 07:16:38.599003 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.599366 kubelet[3358]: E0813 07:16:38.599264 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.599366 kubelet[3358]: W0813 07:16:38.599277 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.599613 kubelet[3358]: E0813 07:16:38.599501 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.600048 kubelet[3358]: E0813 07:16:38.599915 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.600048 kubelet[3358]: W0813 07:16:38.599926 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.601554 kubelet[3358]: E0813 07:16:38.600342 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.601703 kubelet[3358]: E0813 07:16:38.601690 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.601848 kubelet[3358]: W0813 07:16:38.601781 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.601967 kubelet[3358]: E0813 07:16:38.601874 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.602384 kubelet[3358]: E0813 07:16:38.602270 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.602384 kubelet[3358]: W0813 07:16:38.602285 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.602384 kubelet[3358]: E0813 07:16:38.602341 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.602960 kubelet[3358]: E0813 07:16:38.602697 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.602960 kubelet[3358]: W0813 07:16:38.602710 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.602960 kubelet[3358]: E0813 07:16:38.602793 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.603375 kubelet[3358]: E0813 07:16:38.603305 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.603375 kubelet[3358]: W0813 07:16:38.603319 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.603615 kubelet[3358]: E0813 07:16:38.603402 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.603922 kubelet[3358]: E0813 07:16:38.603786 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.603922 kubelet[3358]: W0813 07:16:38.603799 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.603922 kubelet[3358]: E0813 07:16:38.603884 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.604460 kubelet[3358]: E0813 07:16:38.604323 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.604460 kubelet[3358]: W0813 07:16:38.604337 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.604848 kubelet[3358]: E0813 07:16:38.604583 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.605078 kubelet[3358]: E0813 07:16:38.605064 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.605301 kubelet[3358]: W0813 07:16:38.605181 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.605637 kubelet[3358]: E0813 07:16:38.605512 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.605637 kubelet[3358]: W0813 07:16:38.605526 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.605812 kubelet[3358]: E0813 07:16:38.605755 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.605812 kubelet[3358]: E0813 07:16:38.605794 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.606174 kubelet[3358]: E0813 07:16:38.606046 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.606174 kubelet[3358]: W0813 07:16:38.606059 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.607416 kubelet[3358]: E0813 07:16:38.606298 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:38.607693 kubelet[3358]: E0813 07:16:38.607632 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.607693 kubelet[3358]: W0813 07:16:38.607647 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.607693 kubelet[3358]: E0813 07:16:38.607663 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:38.617997 kubelet[3358]: E0813 07:16:38.617908 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:38.617997 kubelet[3358]: W0813 07:16:38.617933 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:38.617997 kubelet[3358]: E0813 07:16:38.617962 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:39.749020 kubelet[3358]: E0813 07:16:39.748537 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spflk" podUID="7b98a07a-1303-4fe2-9cdb-fec495674a1b" Aug 13 07:16:40.005746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1624960519.mount: Deactivated successfully. 
Aug 13 07:16:41.426907 containerd[1819]: time="2025-08-13T07:16:41.426856860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:41.430259 containerd[1819]: time="2025-08-13T07:16:41.430180690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 07:16:41.433522 containerd[1819]: time="2025-08-13T07:16:41.433441520Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:41.438521 containerd[1819]: time="2025-08-13T07:16:41.438452466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:41.439171 containerd[1819]: time="2025-08-13T07:16:41.439021071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.071872369s" Aug 13 07:16:41.439171 containerd[1819]: time="2025-08-13T07:16:41.439064271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 07:16:41.443105 containerd[1819]: time="2025-08-13T07:16:41.442789405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:16:41.452231 containerd[1819]: time="2025-08-13T07:16:41.452081689Z" level=info msg="CreateContainer within sandbox \"0be87045e0c15066ba57de3062eb1d2da3f5e998458eb4976c56fd453c735a7e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 07:16:41.487349 containerd[1819]: time="2025-08-13T07:16:41.487296909Z" level=info msg="CreateContainer within sandbox \"0be87045e0c15066ba57de3062eb1d2da3f5e998458eb4976c56fd453c735a7e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2f351983d56a71af60ee911ae702d8905ad391b94f681abd555799c57e1df18e\"" Aug 13 07:16:41.488987 containerd[1819]: time="2025-08-13T07:16:41.487934215Z" level=info msg="StartContainer for \"2f351983d56a71af60ee911ae702d8905ad391b94f681abd555799c57e1df18e\"" Aug 13 07:16:41.575576 containerd[1819]: time="2025-08-13T07:16:41.575237608Z" level=info msg="StartContainer for \"2f351983d56a71af60ee911ae702d8905ad391b94f681abd555799c57e1df18e\" returns successfully" Aug 13 07:16:41.748713 kubelet[3358]: E0813 07:16:41.748552 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spflk" podUID="7b98a07a-1303-4fe2-9cdb-fec495674a1b" Aug 13 07:16:41.909260 kubelet[3358]: E0813 07:16:41.908580 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:41.909535 kubelet[3358]: W0813 07:16:41.909312 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:41.909535 kubelet[3358]: E0813 07:16:41.909348 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [previous 3 kubelet FlexVolume messages (driver-call.go:262, driver-call.go:149, plugins.go:691) repeated verbatim through Aug 13 07:16:41.931] Aug 13 07:16:42.836518 containerd[1819]: time="2025-08-13T07:16:42.836466666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:42.838580 containerd[1819]: time="2025-08-13T07:16:42.838418884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:16:42.841075 containerd[1819]: time="2025-08-13T07:16:42.840996707Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:42.844560 containerd[1819]: time="2025-08-13T07:16:42.844493739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:42.845270 containerd[1819]: time="2025-08-13T07:16:42.845084444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.402252239s" Aug 13 07:16:42.845270 containerd[1819]: time="2025-08-13T07:16:42.845128545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:16:42.849234 containerd[1819]: time="2025-08-13T07:16:42.849069380Z" level=info msg="CreateContainer within sandbox \"e94cb4bccd585efc87a76919be3c6b63384bad1fd7d4ea2bd6022fd2734b76e6\" for container
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:16:42.882820 kubelet[3358]: I0813 07:16:42.882790 3358 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:16:42.893945 containerd[1819]: time="2025-08-13T07:16:42.893811487Z" level=info msg="CreateContainer within sandbox \"e94cb4bccd585efc87a76919be3c6b63384bad1fd7d4ea2bd6022fd2734b76e6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c236611e944100d99f4ebc941a19d91e022f999b15e5a8ab4e173224c04cbf8d\"" Aug 13 07:16:42.896358 containerd[1819]: time="2025-08-13T07:16:42.895057598Z" level=info msg="StartContainer for \"c236611e944100d99f4ebc941a19d91e022f999b15e5a8ab4e173224c04cbf8d\"" Aug 13 07:16:42.924284 kubelet[3358]: E0813 07:16:42.922863 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.924284 kubelet[3358]: W0813 07:16:42.922893 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.924284 kubelet[3358]: E0813 07:16:42.922935 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [previous 3 kubelet FlexVolume messages (driver-call.go:262, driver-call.go:149, plugins.go:691) repeated verbatim through Aug 13 07:16:42.934]
Error: unexpected end of JSON input" Aug 13 07:16:42.944422 kubelet[3358]: E0813 07:16:42.934670 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.944422 kubelet[3358]: W0813 07:16:42.934684 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.944422 kubelet[3358]: E0813 07:16:42.935556 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:42.944422 kubelet[3358]: E0813 07:16:42.939228 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.944422 kubelet[3358]: W0813 07:16:42.939244 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.944422 kubelet[3358]: E0813 07:16:42.939347 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:42.944422 kubelet[3358]: E0813 07:16:42.939503 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.944422 kubelet[3358]: W0813 07:16:42.939513 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.944422 kubelet[3358]: E0813 07:16:42.939594 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:42.944828 kubelet[3358]: E0813 07:16:42.939725 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.944828 kubelet[3358]: W0813 07:16:42.939735 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.944828 kubelet[3358]: E0813 07:16:42.939818 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:42.944828 kubelet[3358]: E0813 07:16:42.941220 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.944828 kubelet[3358]: W0813 07:16:42.941234 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.944828 kubelet[3358]: E0813 07:16:42.941265 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:42.944828 kubelet[3358]: E0813 07:16:42.941490 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.944828 kubelet[3358]: W0813 07:16:42.941501 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.946048 kubelet[3358]: E0813 07:16:42.945214 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.946048 kubelet[3358]: W0813 07:16:42.945228 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.948577 kubelet[3358]: E0813 07:16:42.948516 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:42.949005 kubelet[3358]: E0813 07:16:42.948988 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.949114 kubelet[3358]: W0813 07:16:42.949099 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.949212 kubelet[3358]: E0813 07:16:42.949198 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:42.949750 kubelet[3358]: E0813 07:16:42.949735 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.949913 kubelet[3358]: W0813 07:16:42.949898 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.950130 kubelet[3358]: E0813 07:16:42.949979 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:42.950462 kubelet[3358]: E0813 07:16:42.950289 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.950462 kubelet[3358]: W0813 07:16:42.950304 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.950462 kubelet[3358]: E0813 07:16:42.950318 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:42.952080 systemd[1]: run-containerd-runc-k8s.io-c236611e944100d99f4ebc941a19d91e022f999b15e5a8ab4e173224c04cbf8d-runc.qlVibH.mount: Deactivated successfully. Aug 13 07:16:42.970270 kubelet[3358]: E0813 07:16:42.965721 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:42.971730 kubelet[3358]: E0813 07:16:42.965984 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.971730 kubelet[3358]: W0813 07:16:42.970385 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.971730 kubelet[3358]: E0813 07:16:42.970429 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:42.971730 kubelet[3358]: I0813 07:16:42.971336 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6796f9f49d-cjj2l" podStartSLOduration=2.896231291 podStartE2EDuration="5.971317291s" podCreationTimestamp="2025-08-13 07:16:37 +0000 UTC" firstStartedPulling="2025-08-13 07:16:38.366433893 +0000 UTC m=+21.948937587" lastFinishedPulling="2025-08-13 07:16:41.441519793 +0000 UTC m=+25.024023587" observedRunningTime="2025-08-13 07:16:41.901406171 +0000 UTC m=+25.483909865" watchObservedRunningTime="2025-08-13 07:16:42.971317291 +0000 UTC m=+26.553821085" Aug 13 07:16:42.975367 kubelet[3358]: E0813 07:16:42.973452 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.975367 kubelet[3358]: W0813 07:16:42.973472 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.975367 kubelet[3358]: E0813 07:16:42.973497 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:42.975367 kubelet[3358]: E0813 07:16:42.973854 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.975367 kubelet[3358]: W0813 07:16:42.973868 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.975367 kubelet[3358]: E0813 07:16:42.973883 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:42.979181 kubelet[3358]: E0813 07:16:42.976864 3358 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:42.979181 kubelet[3358]: W0813 07:16:42.978397 3358 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:42.979181 kubelet[3358]: E0813 07:16:42.978432 3358 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:43.036681 containerd[1819]: time="2025-08-13T07:16:43.036616184Z" level=info msg="StartContainer for \"c236611e944100d99f4ebc941a19d91e022f999b15e5a8ab4e173224c04cbf8d\" returns successfully" Aug 13 07:16:43.449784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c236611e944100d99f4ebc941a19d91e022f999b15e5a8ab4e173224c04cbf8d-rootfs.mount: Deactivated successfully. 
Aug 13 07:16:43.748678 kubelet[3358]: E0813 07:16:43.748479 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spflk" podUID="7b98a07a-1303-4fe2-9cdb-fec495674a1b" Aug 13 07:16:44.606219 containerd[1819]: time="2025-08-13T07:16:44.606109542Z" level=info msg="shim disconnected" id=c236611e944100d99f4ebc941a19d91e022f999b15e5a8ab4e173224c04cbf8d namespace=k8s.io Aug 13 07:16:44.606219 containerd[1819]: time="2025-08-13T07:16:44.606207043Z" level=warning msg="cleaning up after shim disconnected" id=c236611e944100d99f4ebc941a19d91e022f999b15e5a8ab4e173224c04cbf8d namespace=k8s.io Aug 13 07:16:44.606219 containerd[1819]: time="2025-08-13T07:16:44.606220543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:16:44.893365 containerd[1819]: time="2025-08-13T07:16:44.892751846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:16:45.747970 kubelet[3358]: E0813 07:16:45.747901 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spflk" podUID="7b98a07a-1303-4fe2-9cdb-fec495674a1b" Aug 13 07:16:47.749368 kubelet[3358]: E0813 07:16:47.748306 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spflk" podUID="7b98a07a-1303-4fe2-9cdb-fec495674a1b" Aug 13 07:16:48.084506 containerd[1819]: time="2025-08-13T07:16:48.084443984Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:48.097164 containerd[1819]: time="2025-08-13T07:16:48.097059948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:16:48.108609 containerd[1819]: time="2025-08-13T07:16:48.108500297Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:48.113558 containerd[1819]: time="2025-08-13T07:16:48.113479062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:48.114336 containerd[1819]: time="2025-08-13T07:16:48.114139870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.221311623s" Aug 13 07:16:48.114336 containerd[1819]: time="2025-08-13T07:16:48.114202371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:16:48.123180 containerd[1819]: time="2025-08-13T07:16:48.122314777Z" level=info msg="CreateContainer within sandbox \"e94cb4bccd585efc87a76919be3c6b63384bad1fd7d4ea2bd6022fd2734b76e6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:16:48.175255 containerd[1819]: time="2025-08-13T07:16:48.175198264Z" level=info msg="CreateContainer within sandbox \"e94cb4bccd585efc87a76919be3c6b63384bad1fd7d4ea2bd6022fd2734b76e6\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"663738cbcb9bf7970d6c733ade0cbb33bd4e62cbb825c6f26f28865624dfc2e7\"" Aug 13 07:16:48.176036 containerd[1819]: time="2025-08-13T07:16:48.175883073Z" level=info msg="StartContainer for \"663738cbcb9bf7970d6c733ade0cbb33bd4e62cbb825c6f26f28865624dfc2e7\"" Aug 13 07:16:48.213888 systemd[1]: run-containerd-runc-k8s.io-663738cbcb9bf7970d6c733ade0cbb33bd4e62cbb825c6f26f28865624dfc2e7-runc.2vj4Fp.mount: Deactivated successfully. Aug 13 07:16:48.248871 containerd[1819]: time="2025-08-13T07:16:48.248801122Z" level=info msg="StartContainer for \"663738cbcb9bf7970d6c733ade0cbb33bd4e62cbb825c6f26f28865624dfc2e7\" returns successfully" Aug 13 07:16:49.747867 kubelet[3358]: E0813 07:16:49.747500 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spflk" podUID="7b98a07a-1303-4fe2-9cdb-fec495674a1b" Aug 13 07:16:49.927606 containerd[1819]: time="2025-08-13T07:16:49.927549259Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:16:49.955803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-663738cbcb9bf7970d6c733ade0cbb33bd4e62cbb825c6f26f28865624dfc2e7-rootfs.mount: Deactivated successfully. 
Aug 13 07:16:49.968267 kubelet[3358]: I0813 07:16:49.966990 3358 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:16:50.183883 kubelet[3358]: I0813 07:16:50.183827 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bprf8\" (UniqueName: \"kubernetes.io/projected/892537e6-36f9-4f24-929e-348869a4ec0b-kube-api-access-bprf8\") pod \"calico-apiserver-54888cfc75-zm9lk\" (UID: \"892537e6-36f9-4f24-929e-348869a4ec0b\") " pod="calico-apiserver/calico-apiserver-54888cfc75-zm9lk" Aug 13 07:16:50.183883 kubelet[3358]: I0813 07:16:50.183881 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvx9d\" (UniqueName: \"kubernetes.io/projected/78949a8e-334b-48f9-97e4-1c8e76cdaa5a-kube-api-access-fvx9d\") pod \"calico-apiserver-54888cfc75-2xbg6\" (UID: \"78949a8e-334b-48f9-97e4-1c8e76cdaa5a\") " pod="calico-apiserver/calico-apiserver-54888cfc75-2xbg6" Aug 13 07:16:50.184164 kubelet[3358]: I0813 07:16:50.183905 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df5d788-89a4-43a2-8ef9-a2eb77539527-tigera-ca-bundle\") pod \"calico-kube-controllers-69594ffd6c-rdm87\" (UID: \"0df5d788-89a4-43a2-8ef9-a2eb77539527\") " pod="calico-system/calico-kube-controllers-69594ffd6c-rdm87" Aug 13 07:16:50.184164 kubelet[3358]: I0813 07:16:50.183933 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9853a32e-5298-4574-a340-3bc08468487e-whisker-ca-bundle\") pod \"whisker-788d4f8bd5-2tmfw\" (UID: \"9853a32e-5298-4574-a340-3bc08468487e\") " pod="calico-system/whisker-788d4f8bd5-2tmfw" Aug 13 07:16:50.184164 kubelet[3358]: I0813 07:16:50.183955 3358 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-542rc\" (UniqueName: \"kubernetes.io/projected/9853a32e-5298-4574-a340-3bc08468487e-kube-api-access-542rc\") pod \"whisker-788d4f8bd5-2tmfw\" (UID: \"9853a32e-5298-4574-a340-3bc08468487e\") " pod="calico-system/whisker-788d4f8bd5-2tmfw" Aug 13 07:16:50.184164 kubelet[3358]: I0813 07:16:50.183974 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2d672f67-c714-47a4-acde-08f33595be3a-goldmane-key-pair\") pod \"goldmane-58fd7646b9-ppgrt\" (UID: \"2d672f67-c714-47a4-acde-08f33595be3a\") " pod="calico-system/goldmane-58fd7646b9-ppgrt" Aug 13 07:16:50.184164 kubelet[3358]: I0813 07:16:50.183996 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e73f852-6c7b-406b-995e-adedee6784ab-config-volume\") pod \"coredns-7c65d6cfc9-b4tm5\" (UID: \"8e73f852-6c7b-406b-995e-adedee6784ab\") " pod="kube-system/coredns-7c65d6cfc9-b4tm5" Aug 13 07:16:50.184385 kubelet[3358]: I0813 07:16:50.184016 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86478e9f-1c1b-4a14-b1bf-da6e5b798f3d-config-volume\") pod \"coredns-7c65d6cfc9-rqkdd\" (UID: \"86478e9f-1c1b-4a14-b1bf-da6e5b798f3d\") " pod="kube-system/coredns-7c65d6cfc9-rqkdd" Aug 13 07:16:50.184385 kubelet[3358]: I0813 07:16:50.184041 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4mzt\" (UniqueName: \"kubernetes.io/projected/0df5d788-89a4-43a2-8ef9-a2eb77539527-kube-api-access-c4mzt\") pod \"calico-kube-controllers-69594ffd6c-rdm87\" (UID: \"0df5d788-89a4-43a2-8ef9-a2eb77539527\") " pod="calico-system/calico-kube-controllers-69594ffd6c-rdm87" Aug 13 07:16:50.184385 
kubelet[3358]: I0813 07:16:50.184070 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78949a8e-334b-48f9-97e4-1c8e76cdaa5a-calico-apiserver-certs\") pod \"calico-apiserver-54888cfc75-2xbg6\" (UID: \"78949a8e-334b-48f9-97e4-1c8e76cdaa5a\") " pod="calico-apiserver/calico-apiserver-54888cfc75-2xbg6" Aug 13 07:16:50.184385 kubelet[3358]: I0813 07:16:50.184093 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d672f67-c714-47a4-acde-08f33595be3a-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-ppgrt\" (UID: \"2d672f67-c714-47a4-acde-08f33595be3a\") " pod="calico-system/goldmane-58fd7646b9-ppgrt" Aug 13 07:16:50.184385 kubelet[3358]: I0813 07:16:50.184118 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pf7\" (UniqueName: \"kubernetes.io/projected/86478e9f-1c1b-4a14-b1bf-da6e5b798f3d-kube-api-access-h6pf7\") pod \"coredns-7c65d6cfc9-rqkdd\" (UID: \"86478e9f-1c1b-4a14-b1bf-da6e5b798f3d\") " pod="kube-system/coredns-7c65d6cfc9-rqkdd" Aug 13 07:16:50.184601 kubelet[3358]: I0813 07:16:50.184140 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2vvs\" (UniqueName: \"kubernetes.io/projected/2d672f67-c714-47a4-acde-08f33595be3a-kube-api-access-z2vvs\") pod \"goldmane-58fd7646b9-ppgrt\" (UID: \"2d672f67-c714-47a4-acde-08f33595be3a\") " pod="calico-system/goldmane-58fd7646b9-ppgrt" Aug 13 07:16:50.184601 kubelet[3358]: I0813 07:16:50.184192 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkswj\" (UniqueName: \"kubernetes.io/projected/8e73f852-6c7b-406b-995e-adedee6784ab-kube-api-access-jkswj\") pod \"coredns-7c65d6cfc9-b4tm5\" (UID: 
\"8e73f852-6c7b-406b-995e-adedee6784ab\") " pod="kube-system/coredns-7c65d6cfc9-b4tm5" Aug 13 07:16:50.184601 kubelet[3358]: I0813 07:16:50.184224 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9853a32e-5298-4574-a340-3bc08468487e-whisker-backend-key-pair\") pod \"whisker-788d4f8bd5-2tmfw\" (UID: \"9853a32e-5298-4574-a340-3bc08468487e\") " pod="calico-system/whisker-788d4f8bd5-2tmfw" Aug 13 07:16:50.184601 kubelet[3358]: I0813 07:16:50.184248 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d672f67-c714-47a4-acde-08f33595be3a-config\") pod \"goldmane-58fd7646b9-ppgrt\" (UID: \"2d672f67-c714-47a4-acde-08f33595be3a\") " pod="calico-system/goldmane-58fd7646b9-ppgrt" Aug 13 07:16:50.184601 kubelet[3358]: I0813 07:16:50.184273 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/892537e6-36f9-4f24-929e-348869a4ec0b-calico-apiserver-certs\") pod \"calico-apiserver-54888cfc75-zm9lk\" (UID: \"892537e6-36f9-4f24-929e-348869a4ec0b\") " pod="calico-apiserver/calico-apiserver-54888cfc75-zm9lk" Aug 13 07:16:51.199192 containerd[1819]: time="2025-08-13T07:16:51.198776352Z" level=info msg="shim disconnected" id=663738cbcb9bf7970d6c733ade0cbb33bd4e62cbb825c6f26f28865624dfc2e7 namespace=k8s.io Aug 13 07:16:51.199192 containerd[1819]: time="2025-08-13T07:16:51.198849453Z" level=warning msg="cleaning up after shim disconnected" id=663738cbcb9bf7970d6c733ade0cbb33bd4e62cbb825c6f26f28865624dfc2e7 namespace=k8s.io Aug 13 07:16:51.199192 containerd[1819]: time="2025-08-13T07:16:51.198862053Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:16:51.223141 containerd[1819]: time="2025-08-13T07:16:51.222617367Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-b4tm5,Uid:8e73f852-6c7b-406b-995e-adedee6784ab,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:51.234516 containerd[1819]: time="2025-08-13T07:16:51.234467123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rqkdd,Uid:86478e9f-1c1b-4a14-b1bf-da6e5b798f3d,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:51.237134 containerd[1819]: time="2025-08-13T07:16:51.237097858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54888cfc75-2xbg6,Uid:78949a8e-334b-48f9-97e4-1c8e76cdaa5a,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:16:51.238908 containerd[1819]: time="2025-08-13T07:16:51.238875482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-788d4f8bd5-2tmfw,Uid:9853a32e-5298-4574-a340-3bc08468487e,Namespace:calico-system,Attempt:0,}" Aug 13 07:16:51.251695 containerd[1819]: time="2025-08-13T07:16:51.251645750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69594ffd6c-rdm87,Uid:0df5d788-89a4-43a2-8ef9-a2eb77539527,Namespace:calico-system,Attempt:0,}" Aug 13 07:16:51.254071 containerd[1819]: time="2025-08-13T07:16:51.253708178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54888cfc75-zm9lk,Uid:892537e6-36f9-4f24-929e-348869a4ec0b,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:16:51.255680 containerd[1819]: time="2025-08-13T07:16:51.255645803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ppgrt,Uid:2d672f67-c714-47a4-acde-08f33595be3a,Namespace:calico-system,Attempt:0,}" Aug 13 07:16:51.436676 containerd[1819]: time="2025-08-13T07:16:51.436536093Z" level=error msg="Failed to destroy network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 07:16:51.437371 containerd[1819]: time="2025-08-13T07:16:51.437165901Z" level=error msg="encountered an error cleaning up failed sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.437371 containerd[1819]: time="2025-08-13T07:16:51.437248602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b4tm5,Uid:8e73f852-6c7b-406b-995e-adedee6784ab,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.438470 kubelet[3358]: E0813 07:16:51.437701 3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.438470 kubelet[3358]: E0813 07:16:51.437807 3358 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b4tm5" Aug 13 07:16:51.438470 kubelet[3358]: E0813 07:16:51.437840 
3358 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b4tm5" Aug 13 07:16:51.439389 kubelet[3358]: E0813 07:16:51.437903 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-b4tm5_kube-system(8e73f852-6c7b-406b-995e-adedee6784ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-b4tm5_kube-system(8e73f852-6c7b-406b-995e-adedee6784ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b4tm5" podUID="8e73f852-6c7b-406b-995e-adedee6784ab" Aug 13 07:16:51.565262 containerd[1819]: time="2025-08-13T07:16:51.564130178Z" level=error msg="Failed to destroy network for sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.565262 containerd[1819]: time="2025-08-13T07:16:51.564514784Z" level=error msg="encountered an error cleaning up failed sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.565262 containerd[1819]: time="2025-08-13T07:16:51.564615485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-788d4f8bd5-2tmfw,Uid:9853a32e-5298-4574-a340-3bc08468487e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.566361 kubelet[3358]: E0813 07:16:51.564913 3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.566361 kubelet[3358]: E0813 07:16:51.565067 3358 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-788d4f8bd5-2tmfw" Aug 13 07:16:51.566361 kubelet[3358]: E0813 07:16:51.565101 3358 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-788d4f8bd5-2tmfw" Aug 13 07:16:51.566527 kubelet[3358]: E0813 07:16:51.565214 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-788d4f8bd5-2tmfw_calico-system(9853a32e-5298-4574-a340-3bc08468487e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-788d4f8bd5-2tmfw_calico-system(9853a32e-5298-4574-a340-3bc08468487e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-788d4f8bd5-2tmfw" podUID="9853a32e-5298-4574-a340-3bc08468487e" Aug 13 07:16:51.608889 containerd[1819]: time="2025-08-13T07:16:51.608563565Z" level=error msg="Failed to destroy network for sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.609269 containerd[1819]: time="2025-08-13T07:16:51.609045372Z" level=error msg="encountered an error cleaning up failed sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.609269 containerd[1819]: time="2025-08-13T07:16:51.609110073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69594ffd6c-rdm87,Uid:0df5d788-89a4-43a2-8ef9-a2eb77539527,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.609588 kubelet[3358]: E0813 07:16:51.609486 3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.609588 kubelet[3358]: E0813 07:16:51.609556 3358 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69594ffd6c-rdm87" Aug 13 07:16:51.609588 kubelet[3358]: E0813 07:16:51.609585 3358 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69594ffd6c-rdm87" Aug 13 07:16:51.610008 kubelet[3358]: E0813 07:16:51.609636 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69594ffd6c-rdm87_calico-system(0df5d788-89a4-43a2-8ef9-a2eb77539527)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-69594ffd6c-rdm87_calico-system(0df5d788-89a4-43a2-8ef9-a2eb77539527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69594ffd6c-rdm87" podUID="0df5d788-89a4-43a2-8ef9-a2eb77539527" Aug 13 07:16:51.645295 containerd[1819]: time="2025-08-13T07:16:51.645180149Z" level=error msg="Failed to destroy network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.645722 containerd[1819]: time="2025-08-13T07:16:51.645570254Z" level=error msg="encountered an error cleaning up failed sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.645722 containerd[1819]: time="2025-08-13T07:16:51.645635055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rqkdd,Uid:86478e9f-1c1b-4a14-b1bf-da6e5b798f3d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.646526 kubelet[3358]: E0813 07:16:51.645877 
3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.646526 kubelet[3358]: E0813 07:16:51.646381 3358 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rqkdd" Aug 13 07:16:51.646526 kubelet[3358]: E0813 07:16:51.646418 3358 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rqkdd" Aug 13 07:16:51.646835 kubelet[3358]: E0813 07:16:51.646721 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rqkdd_kube-system(86478e9f-1c1b-4a14-b1bf-da6e5b798f3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rqkdd_kube-system(86478e9f-1c1b-4a14-b1bf-da6e5b798f3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rqkdd" podUID="86478e9f-1c1b-4a14-b1bf-da6e5b798f3d" Aug 13 07:16:51.660228 containerd[1819]: time="2025-08-13T07:16:51.660071546Z" level=error msg="Failed to destroy network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.660996 containerd[1819]: time="2025-08-13T07:16:51.660931357Z" level=error msg="encountered an error cleaning up failed sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.661239 containerd[1819]: time="2025-08-13T07:16:51.661195461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54888cfc75-2xbg6,Uid:78949a8e-334b-48f9-97e4-1c8e76cdaa5a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.661777 kubelet[3358]: E0813 07:16:51.661711 3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 
13 07:16:51.661980 kubelet[3358]: E0813 07:16:51.661780 3358 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54888cfc75-2xbg6" Aug 13 07:16:51.661980 kubelet[3358]: E0813 07:16:51.661805 3358 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54888cfc75-2xbg6" Aug 13 07:16:51.661980 kubelet[3358]: E0813 07:16:51.661860 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54888cfc75-2xbg6_calico-apiserver(78949a8e-334b-48f9-97e4-1c8e76cdaa5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54888cfc75-2xbg6_calico-apiserver(78949a8e-334b-48f9-97e4-1c8e76cdaa5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54888cfc75-2xbg6" podUID="78949a8e-334b-48f9-97e4-1c8e76cdaa5a" Aug 13 07:16:51.671678 containerd[1819]: time="2025-08-13T07:16:51.671609898Z" level=error msg="Failed to destroy network for sandbox 
\"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.672379 containerd[1819]: time="2025-08-13T07:16:51.672330908Z" level=error msg="encountered an error cleaning up failed sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.672379 containerd[1819]: time="2025-08-13T07:16:51.672437209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54888cfc75-zm9lk,Uid:892537e6-36f9-4f24-929e-348869a4ec0b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.673461 kubelet[3358]: E0813 07:16:51.672991 3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.673461 kubelet[3358]: E0813 07:16:51.673068 3358 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54888cfc75-zm9lk" Aug 13 07:16:51.673461 kubelet[3358]: E0813 07:16:51.673092 3358 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54888cfc75-zm9lk" Aug 13 07:16:51.673894 kubelet[3358]: E0813 07:16:51.673157 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54888cfc75-zm9lk_calico-apiserver(892537e6-36f9-4f24-929e-348869a4ec0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54888cfc75-zm9lk_calico-apiserver(892537e6-36f9-4f24-929e-348869a4ec0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54888cfc75-zm9lk" podUID="892537e6-36f9-4f24-929e-348869a4ec0b" Aug 13 07:16:51.675399 containerd[1819]: time="2025-08-13T07:16:51.675362448Z" level=error msg="Failed to destroy network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.675861 containerd[1819]: 
time="2025-08-13T07:16:51.675825654Z" level=error msg="encountered an error cleaning up failed sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.675954 containerd[1819]: time="2025-08-13T07:16:51.675889555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ppgrt,Uid:2d672f67-c714-47a4-acde-08f33595be3a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.676427 kubelet[3358]: E0813 07:16:51.676295 3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.676427 kubelet[3358]: E0813 07:16:51.676383 3358 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ppgrt" Aug 13 07:16:51.676427 kubelet[3358]: E0813 07:16:51.676409 3358 kuberuntime_manager.go:1170] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ppgrt" Aug 13 07:16:51.676587 kubelet[3358]: E0813 07:16:51.676465 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-ppgrt_calico-system(2d672f67-c714-47a4-acde-08f33595be3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-ppgrt_calico-system(2d672f67-c714-47a4-acde-08f33595be3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ppgrt" podUID="2d672f67-c714-47a4-acde-08f33595be3a" Aug 13 07:16:51.752103 containerd[1819]: time="2025-08-13T07:16:51.751452053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spflk,Uid:7b98a07a-1303-4fe2-9cdb-fec495674a1b,Namespace:calico-system,Attempt:0,}" Aug 13 07:16:51.834318 containerd[1819]: time="2025-08-13T07:16:51.834098445Z" level=error msg="Failed to destroy network for sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.834626 containerd[1819]: time="2025-08-13T07:16:51.834571151Z" level=error msg="encountered an error cleaning up failed sandbox 
\"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.834725 containerd[1819]: time="2025-08-13T07:16:51.834649652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spflk,Uid:7b98a07a-1303-4fe2-9cdb-fec495674a1b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.834931 kubelet[3358]: E0813 07:16:51.834892 3358 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:51.834931 kubelet[3358]: E0813 07:16:51.834952 3358 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spflk" Aug 13 07:16:51.835182 kubelet[3358]: E0813 07:16:51.835058 3358 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spflk" Aug 13 07:16:51.836010 kubelet[3358]: E0813 07:16:51.835302 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spflk_calico-system(7b98a07a-1303-4fe2-9cdb-fec495674a1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spflk_calico-system(7b98a07a-1303-4fe2-9cdb-fec495674a1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spflk" podUID="7b98a07a-1303-4fe2-9cdb-fec495674a1b" Aug 13 07:16:51.911172 kubelet[3358]: I0813 07:16:51.911097 3358 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:16:51.912997 containerd[1819]: time="2025-08-13T07:16:51.912465980Z" level=info msg="StopPodSandbox for \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\"" Aug 13 07:16:51.912997 containerd[1819]: time="2025-08-13T07:16:51.912689183Z" level=info msg="Ensure that sandbox ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2 in task-service has been cleanup successfully" Aug 13 07:16:51.914645 kubelet[3358]: I0813 07:16:51.914435 3358 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:16:51.915483 containerd[1819]: 
time="2025-08-13T07:16:51.915342518Z" level=info msg="StopPodSandbox for \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\"" Aug 13 07:16:51.915597 containerd[1819]: time="2025-08-13T07:16:51.915547521Z" level=info msg="Ensure that sandbox 9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905 in task-service has been cleanup successfully" Aug 13 07:16:51.920517 kubelet[3358]: I0813 07:16:51.919715 3358 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:16:51.921368 containerd[1819]: time="2025-08-13T07:16:51.921330597Z" level=info msg="StopPodSandbox for \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\"" Aug 13 07:16:51.921581 containerd[1819]: time="2025-08-13T07:16:51.921542700Z" level=info msg="Ensure that sandbox c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b in task-service has been cleanup successfully" Aug 13 07:16:51.925233 kubelet[3358]: I0813 07:16:51.924427 3358 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:16:51.927110 containerd[1819]: time="2025-08-13T07:16:51.926826570Z" level=info msg="StopPodSandbox for \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\"" Aug 13 07:16:51.927110 containerd[1819]: time="2025-08-13T07:16:51.927036373Z" level=info msg="Ensure that sandbox 7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128 in task-service has been cleanup successfully" Aug 13 07:16:51.927692 kubelet[3358]: I0813 07:16:51.927056 3358 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Aug 13 07:16:51.931904 containerd[1819]: time="2025-08-13T07:16:51.931865636Z" level=info msg="StopPodSandbox for 
\"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\"" Aug 13 07:16:51.934442 containerd[1819]: time="2025-08-13T07:16:51.934409270Z" level=info msg="Ensure that sandbox 91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72 in task-service has been cleanup successfully" Aug 13 07:16:51.941361 kubelet[3358]: I0813 07:16:51.940925 3358 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:16:51.944136 containerd[1819]: time="2025-08-13T07:16:51.943808094Z" level=info msg="StopPodSandbox for \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\"" Aug 13 07:16:51.944136 containerd[1819]: time="2025-08-13T07:16:51.944030397Z" level=info msg="Ensure that sandbox fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95 in task-service has been cleanup successfully" Aug 13 07:16:51.949041 kubelet[3358]: I0813 07:16:51.949009 3358 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:16:51.952226 containerd[1819]: time="2025-08-13T07:16:51.951998902Z" level=info msg="StopPodSandbox for \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\"" Aug 13 07:16:51.956130 containerd[1819]: time="2025-08-13T07:16:51.956071756Z" level=info msg="Ensure that sandbox d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f in task-service has been cleanup successfully" Aug 13 07:16:51.968678 containerd[1819]: time="2025-08-13T07:16:51.968489420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:16:51.969413 kubelet[3358]: I0813 07:16:51.969365 3358 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:16:51.972410 containerd[1819]: time="2025-08-13T07:16:51.971855965Z" level=info 
msg="StopPodSandbox for \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\"" Aug 13 07:16:51.972410 containerd[1819]: time="2025-08-13T07:16:51.972086168Z" level=info msg="Ensure that sandbox 62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae in task-service has been cleanup successfully" Aug 13 07:16:52.108968 containerd[1819]: time="2025-08-13T07:16:52.108803674Z" level=error msg="StopPodSandbox for \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\" failed" error="failed to destroy network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:52.110774 kubelet[3358]: E0813 07:16:52.110735 3358 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:16:52.111899 kubelet[3358]: E0813 07:16:52.111659 3358 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128"} Aug 13 07:16:52.111899 kubelet[3358]: E0813 07:16:52.111847 3358 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e73f852-6c7b-406b-995e-adedee6784ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:16:52.112198 kubelet[3358]: E0813 07:16:52.112108 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e73f852-6c7b-406b-995e-adedee6784ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b4tm5" podUID="8e73f852-6c7b-406b-995e-adedee6784ab" Aug 13 07:16:52.146206 containerd[1819]: time="2025-08-13T07:16:52.146087566Z" level=error msg="StopPodSandbox for \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\" failed" error="failed to destroy network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:52.147213 kubelet[3358]: E0813 07:16:52.146714 3358 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:16:52.147213 kubelet[3358]: E0813 07:16:52.146783 3358 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905"} Aug 13 07:16:52.147213 kubelet[3358]: E0813 07:16:52.146831 3358 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86478e9f-1c1b-4a14-b1bf-da6e5b798f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:16:52.147213 kubelet[3358]: E0813 07:16:52.146865 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86478e9f-1c1b-4a14-b1bf-da6e5b798f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rqkdd" podUID="86478e9f-1c1b-4a14-b1bf-da6e5b798f3d" Aug 13 07:16:52.151708 containerd[1819]: time="2025-08-13T07:16:52.151188234Z" level=error msg="StopPodSandbox for \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\" failed" error="failed to destroy network for sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:52.153504 kubelet[3358]: E0813 07:16:52.153215 3358 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:16:52.153504 kubelet[3358]: E0813 07:16:52.153299 3358 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95"} Aug 13 07:16:52.153504 kubelet[3358]: E0813 07:16:52.153366 3358 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9853a32e-5298-4574-a340-3bc08468487e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:16:52.153504 kubelet[3358]: E0813 07:16:52.153402 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9853a32e-5298-4574-a340-3bc08468487e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-788d4f8bd5-2tmfw" podUID="9853a32e-5298-4574-a340-3bc08468487e" Aug 13 07:16:52.156648 containerd[1819]: time="2025-08-13T07:16:52.156592805Z" level=error msg="StopPodSandbox for \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\" failed" error="failed to destroy network for 
sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:52.157284 kubelet[3358]: E0813 07:16:52.157048 3358 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:16:52.157284 kubelet[3358]: E0813 07:16:52.157121 3358 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f"} Aug 13 07:16:52.157284 kubelet[3358]: E0813 07:16:52.157185 3358 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0df5d788-89a4-43a2-8ef9-a2eb77539527\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:16:52.157284 kubelet[3358]: E0813 07:16:52.157217 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0df5d788-89a4-43a2-8ef9-a2eb77539527\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69594ffd6c-rdm87" podUID="0df5d788-89a4-43a2-8ef9-a2eb77539527" Aug 13 07:16:52.162455 containerd[1819]: time="2025-08-13T07:16:52.162395382Z" level=error msg="StopPodSandbox for \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\" failed" error="failed to destroy network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:52.163051 kubelet[3358]: E0813 07:16:52.162710 3358 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:16:52.163051 kubelet[3358]: E0813 07:16:52.162785 3358 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2"} Aug 13 07:16:52.163051 kubelet[3358]: E0813 07:16:52.162832 3358 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78949a8e-334b-48f9-97e4-1c8e76cdaa5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Aug 13 07:16:52.163051 kubelet[3358]: E0813 07:16:52.162882 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78949a8e-334b-48f9-97e4-1c8e76cdaa5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54888cfc75-2xbg6" podUID="78949a8e-334b-48f9-97e4-1c8e76cdaa5a" Aug 13 07:16:52.165259 containerd[1819]: time="2025-08-13T07:16:52.164421408Z" level=error msg="StopPodSandbox for \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\" failed" error="failed to destroy network for sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:52.165428 kubelet[3358]: E0813 07:16:52.164753 3358 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Aug 13 07:16:52.165428 kubelet[3358]: E0813 07:16:52.164803 3358 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72"} Aug 13 07:16:52.165428 kubelet[3358]: E0813 
07:16:52.164840 3358 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"892537e6-36f9-4f24-929e-348869a4ec0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:16:52.165428 kubelet[3358]: E0813 07:16:52.164873 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"892537e6-36f9-4f24-929e-348869a4ec0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54888cfc75-zm9lk" podUID="892537e6-36f9-4f24-929e-348869a4ec0b" Aug 13 07:16:52.166670 containerd[1819]: time="2025-08-13T07:16:52.166601837Z" level=error msg="StopPodSandbox for \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\" failed" error="failed to destroy network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:52.167253 kubelet[3358]: E0813 07:16:52.166833 3358 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:16:52.167253 kubelet[3358]: E0813 07:16:52.166896 3358 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b"} Aug 13 07:16:52.167253 kubelet[3358]: E0813 07:16:52.166934 3358 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d672f67-c714-47a4-acde-08f33595be3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:16:52.167253 kubelet[3358]: E0813 07:16:52.166962 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d672f67-c714-47a4-acde-08f33595be3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ppgrt" podUID="2d672f67-c714-47a4-acde-08f33595be3a" Aug 13 07:16:52.177662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128-shm.mount: Deactivated successfully. 
Aug 13 07:16:52.180138 containerd[1819]: time="2025-08-13T07:16:52.180047515Z" level=error msg="StopPodSandbox for \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\" failed" error="failed to destroy network for sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:52.180654 kubelet[3358]: E0813 07:16:52.180431 3358 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:16:52.180654 kubelet[3358]: E0813 07:16:52.180499 3358 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae"} Aug 13 07:16:52.180654 kubelet[3358]: E0813 07:16:52.180548 3358 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b98a07a-1303-4fe2-9cdb-fec495674a1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:16:52.180654 kubelet[3358]: E0813 07:16:52.180583 3358 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b98a07a-1303-4fe2-9cdb-fec495674a1b\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spflk" podUID="7b98a07a-1303-4fe2-9cdb-fec495674a1b" Aug 13 07:16:58.629043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176141537.mount: Deactivated successfully. Aug 13 07:16:58.676830 containerd[1819]: time="2025-08-13T07:16:58.676769467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:58.683182 containerd[1819]: time="2025-08-13T07:16:58.683080249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:16:58.686815 containerd[1819]: time="2025-08-13T07:16:58.686733797Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:58.695514 containerd[1819]: time="2025-08-13T07:16:58.695431010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:58.696394 containerd[1819]: time="2025-08-13T07:16:58.696179119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.727565697s" Aug 13 07:16:58.696394 
containerd[1819]: time="2025-08-13T07:16:58.696241620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:16:58.714598 containerd[1819]: time="2025-08-13T07:16:58.714363756Z" level=info msg="CreateContainer within sandbox \"e94cb4bccd585efc87a76919be3c6b63384bad1fd7d4ea2bd6022fd2734b76e6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:16:58.758654 containerd[1819]: time="2025-08-13T07:16:58.758605631Z" level=info msg="CreateContainer within sandbox \"e94cb4bccd585efc87a76919be3c6b63384bad1fd7d4ea2bd6022fd2734b76e6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cdbf8a2ded046abc57255537f35ab585fbceff24fd5915bf4ffdc33f7d2b7f78\"" Aug 13 07:16:58.762048 containerd[1819]: time="2025-08-13T07:16:58.761806972Z" level=info msg="StartContainer for \"cdbf8a2ded046abc57255537f35ab585fbceff24fd5915bf4ffdc33f7d2b7f78\"" Aug 13 07:16:58.829888 containerd[1819]: time="2025-08-13T07:16:58.829591253Z" level=info msg="StartContainer for \"cdbf8a2ded046abc57255537f35ab585fbceff24fd5915bf4ffdc33f7d2b7f78\" returns successfully" Aug 13 07:16:59.020322 kubelet[3358]: I0813 07:16:59.020166 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-smzrt" podStartSLOduration=0.88694853 podStartE2EDuration="21.020127829s" podCreationTimestamp="2025-08-13 07:16:38 +0000 UTC" firstStartedPulling="2025-08-13 07:16:38.564023934 +0000 UTC m=+22.146527628" lastFinishedPulling="2025-08-13 07:16:58.697203233 +0000 UTC m=+42.279706927" observedRunningTime="2025-08-13 07:16:59.019639123 +0000 UTC m=+42.602142917" watchObservedRunningTime="2025-08-13 07:16:59.020127829 +0000 UTC m=+42.602631823" Aug 13 07:16:59.303021 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:16:59.303229 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld . All Rights Reserved. Aug 13 07:16:59.457035 containerd[1819]: time="2025-08-13T07:16:59.456970306Z" level=info msg="StopPodSandbox for \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\"" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.537 [INFO][4616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.538 [INFO][4616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" iface="eth0" netns="/var/run/netns/cni-09b3eb41-a156-e3ec-870b-b4e5a700d75b" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.539 [INFO][4616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" iface="eth0" netns="/var/run/netns/cni-09b3eb41-a156-e3ec-870b-b4e5a700d75b" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.540 [INFO][4616] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" iface="eth0" netns="/var/run/netns/cni-09b3eb41-a156-e3ec-870b-b4e5a700d75b" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.540 [INFO][4616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.540 [INFO][4616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.575 [INFO][4625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" HandleID="k8s-pod-network.fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.576 [INFO][4625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.576 [INFO][4625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.585 [WARNING][4625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" HandleID="k8s-pod-network.fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.585 [INFO][4625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" HandleID="k8s-pod-network.fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.589 [INFO][4625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:59.595566 containerd[1819]: 2025-08-13 07:16:59.593 [INFO][4616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:16:59.597902 containerd[1819]: time="2025-08-13T07:16:59.595796010Z" level=info msg="TearDown network for sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\" successfully" Aug 13 07:16:59.597902 containerd[1819]: time="2025-08-13T07:16:59.595844011Z" level=info msg="StopPodSandbox for \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\" returns successfully" Aug 13 07:16:59.627892 systemd[1]: run-netns-cni\x2d09b3eb41\x2da156\x2de3ec\x2d870b\x2db4e5a700d75b.mount: Deactivated successfully. 
Aug 13 07:16:59.652444 kubelet[3358]: I0813 07:16:59.651706 3358 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9853a32e-5298-4574-a340-3bc08468487e-whisker-ca-bundle\") pod \"9853a32e-5298-4574-a340-3bc08468487e\" (UID: \"9853a32e-5298-4574-a340-3bc08468487e\") " Aug 13 07:16:59.652444 kubelet[3358]: I0813 07:16:59.651771 3358 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-542rc\" (UniqueName: \"kubernetes.io/projected/9853a32e-5298-4574-a340-3bc08468487e-kube-api-access-542rc\") pod \"9853a32e-5298-4574-a340-3bc08468487e\" (UID: \"9853a32e-5298-4574-a340-3bc08468487e\") " Aug 13 07:16:59.652444 kubelet[3358]: I0813 07:16:59.651813 3358 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9853a32e-5298-4574-a340-3bc08468487e-whisker-backend-key-pair\") pod \"9853a32e-5298-4574-a340-3bc08468487e\" (UID: \"9853a32e-5298-4574-a340-3bc08468487e\") " Aug 13 07:16:59.653731 kubelet[3358]: I0813 07:16:59.653679 3358 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9853a32e-5298-4574-a340-3bc08468487e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9853a32e-5298-4574-a340-3bc08468487e" (UID: "9853a32e-5298-4574-a340-3bc08468487e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:16:59.659989 kubelet[3358]: I0813 07:16:59.659917 3358 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9853a32e-5298-4574-a340-3bc08468487e-kube-api-access-542rc" (OuterVolumeSpecName: "kube-api-access-542rc") pod "9853a32e-5298-4574-a340-3bc08468487e" (UID: "9853a32e-5298-4574-a340-3bc08468487e"). InnerVolumeSpecName "kube-api-access-542rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:16:59.661206 kubelet[3358]: I0813 07:16:59.661107 3358 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9853a32e-5298-4574-a340-3bc08468487e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9853a32e-5298-4574-a340-3bc08468487e" (UID: "9853a32e-5298-4574-a340-3bc08468487e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:16:59.663757 systemd[1]: var-lib-kubelet-pods-9853a32e\x2d5298\x2d4574\x2da340\x2d3bc08468487e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:16:59.670795 systemd[1]: var-lib-kubelet-pods-9853a32e\x2d5298\x2d4574\x2da340\x2d3bc08468487e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d542rc.mount: Deactivated successfully. Aug 13 07:16:59.752840 kubelet[3358]: I0813 07:16:59.752678 3358 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9853a32e-5298-4574-a340-3bc08468487e-whisker-backend-key-pair\") on node \"ci-4081.3.5-a-0c3b310332\" DevicePath \"\"" Aug 13 07:16:59.752840 kubelet[3358]: I0813 07:16:59.752721 3358 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9853a32e-5298-4574-a340-3bc08468487e-whisker-ca-bundle\") on node \"ci-4081.3.5-a-0c3b310332\" DevicePath \"\"" Aug 13 07:16:59.752840 kubelet[3358]: I0813 07:16:59.752737 3358 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-542rc\" (UniqueName: \"kubernetes.io/projected/9853a32e-5298-4574-a340-3bc08468487e-kube-api-access-542rc\") on node \"ci-4081.3.5-a-0c3b310332\" DevicePath \"\"" Aug 13 07:17:00.155900 kubelet[3358]: I0813 07:17:00.155830 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06dc60ab-bf01-4629-ac0b-ec802c41308e-whisker-ca-bundle\") pod \"whisker-74ff5d98bc-57jhv\" (UID: \"06dc60ab-bf01-4629-ac0b-ec802c41308e\") " pod="calico-system/whisker-74ff5d98bc-57jhv" Aug 13 07:17:00.156536 kubelet[3358]: I0813 07:17:00.155918 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fxxn\" (UniqueName: \"kubernetes.io/projected/06dc60ab-bf01-4629-ac0b-ec802c41308e-kube-api-access-2fxxn\") pod \"whisker-74ff5d98bc-57jhv\" (UID: \"06dc60ab-bf01-4629-ac0b-ec802c41308e\") " pod="calico-system/whisker-74ff5d98bc-57jhv" Aug 13 07:17:00.156536 kubelet[3358]: I0813 07:17:00.155995 3358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06dc60ab-bf01-4629-ac0b-ec802c41308e-whisker-backend-key-pair\") pod \"whisker-74ff5d98bc-57jhv\" (UID: \"06dc60ab-bf01-4629-ac0b-ec802c41308e\") " pod="calico-system/whisker-74ff5d98bc-57jhv" Aug 13 07:17:00.399057 containerd[1819]: time="2025-08-13T07:17:00.399004549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74ff5d98bc-57jhv,Uid:06dc60ab-bf01-4629-ac0b-ec802c41308e,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:00.582926 systemd-networkd[1388]: cali41c7b914d91: Link UP Aug 13 07:17:00.583838 systemd-networkd[1388]: cali41c7b914d91: Gained carrier Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.462 [INFO][4669] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.473 [INFO][4669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0 whisker-74ff5d98bc- calico-system 06dc60ab-bf01-4629-ac0b-ec802c41308e 932 0 2025-08-13 07:17:00 +0000 UTC 
map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74ff5d98bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-a-0c3b310332 whisker-74ff5d98bc-57jhv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali41c7b914d91 [] [] }} ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Namespace="calico-system" Pod="whisker-74ff5d98bc-57jhv" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.473 [INFO][4669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Namespace="calico-system" Pod="whisker-74ff5d98bc-57jhv" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.500 [INFO][4681] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" HandleID="k8s-pod-network.e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.501 [INFO][4681] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" HandleID="k8s-pod-network.e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-0c3b310332", "pod":"whisker-74ff5d98bc-57jhv", "timestamp":"2025-08-13 07:17:00.500733871 +0000 UTC"}, Hostname:"ci-4081.3.5-a-0c3b310332", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.501 [INFO][4681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.501 [INFO][4681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.501 [INFO][4681] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-0c3b310332' Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.508 [INFO][4681] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.514 [INFO][4681] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.520 [INFO][4681] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.522 [INFO][4681] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.524 [INFO][4681] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.524 [INFO][4681] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.526 [INFO][4681] ipam/ipam.go 1764: Creating new 
handle: k8s-pod-network.e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1 Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.533 [INFO][4681] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.541 [INFO][4681] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.65/26] block=192.168.22.64/26 handle="k8s-pod-network.e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.541 [INFO][4681] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.65/26] handle="k8s-pod-network.e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.542 [INFO][4681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:17:00.601811 containerd[1819]: 2025-08-13 07:17:00.542 [INFO][4681] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.65/26] IPv6=[] ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" HandleID="k8s-pod-network.e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" Aug 13 07:17:00.602811 containerd[1819]: 2025-08-13 07:17:00.543 [INFO][4669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Namespace="calico-system" Pod="whisker-74ff5d98bc-57jhv" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0", GenerateName:"whisker-74ff5d98bc-", Namespace:"calico-system", SelfLink:"", UID:"06dc60ab-bf01-4629-ac0b-ec802c41308e", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74ff5d98bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"", Pod:"whisker-74ff5d98bc-57jhv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali41c7b914d91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:00.602811 containerd[1819]: 2025-08-13 07:17:00.543 [INFO][4669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.65/32] ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Namespace="calico-system" Pod="whisker-74ff5d98bc-57jhv" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" Aug 13 07:17:00.602811 containerd[1819]: 2025-08-13 07:17:00.543 [INFO][4669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41c7b914d91 ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Namespace="calico-system" Pod="whisker-74ff5d98bc-57jhv" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" Aug 13 07:17:00.602811 containerd[1819]: 2025-08-13 07:17:00.581 [INFO][4669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Namespace="calico-system" Pod="whisker-74ff5d98bc-57jhv" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" Aug 13 07:17:00.602811 containerd[1819]: 2025-08-13 07:17:00.582 [INFO][4669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Namespace="calico-system" Pod="whisker-74ff5d98bc-57jhv" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0", GenerateName:"whisker-74ff5d98bc-", Namespace:"calico-system", SelfLink:"", 
UID:"06dc60ab-bf01-4629-ac0b-ec802c41308e", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74ff5d98bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1", Pod:"whisker-74ff5d98bc-57jhv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali41c7b914d91", MAC:"96:d2:66:6d:8e:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:00.602811 containerd[1819]: 2025-08-13 07:17:00.600 [INFO][4669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1" Namespace="calico-system" Pod="whisker-74ff5d98bc-57jhv" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--74ff5d98bc--57jhv-eth0" Aug 13 07:17:00.623694 containerd[1819]: time="2025-08-13T07:17:00.623540867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:00.623694 containerd[1819]: time="2025-08-13T07:17:00.623637568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:00.623694 containerd[1819]: time="2025-08-13T07:17:00.623660768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:00.624895 containerd[1819]: time="2025-08-13T07:17:00.623765970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:00.694472 containerd[1819]: time="2025-08-13T07:17:00.694425688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74ff5d98bc-57jhv,Uid:06dc60ab-bf01-4629-ac0b-ec802c41308e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1\"" Aug 13 07:17:00.697302 containerd[1819]: time="2025-08-13T07:17:00.696346213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:17:00.750457 kubelet[3358]: I0813 07:17:00.750416 3358 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9853a32e-5298-4574-a340-3bc08468487e" path="/var/lib/kubelet/pods/9853a32e-5298-4574-a340-3bc08468487e/volumes" Aug 13 07:17:01.157172 kernel: bpftool[4837]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:17:01.533051 systemd-networkd[1388]: vxlan.calico: Link UP Aug 13 07:17:01.533063 systemd-networkd[1388]: vxlan.calico: Gained carrier Aug 13 07:17:02.189353 systemd-networkd[1388]: cali41c7b914d91: Gained IPv6LL Aug 13 07:17:02.192391 containerd[1819]: time="2025-08-13T07:17:02.192272753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:02.194974 containerd[1819]: time="2025-08-13T07:17:02.194911888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:17:02.198399 containerd[1819]: 
time="2025-08-13T07:17:02.198316632Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:02.204941 containerd[1819]: time="2025-08-13T07:17:02.204863017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:02.205781 containerd[1819]: time="2025-08-13T07:17:02.205732928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.509341415s" Aug 13 07:17:02.205781 containerd[1819]: time="2025-08-13T07:17:02.205785529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:17:02.208429 containerd[1819]: time="2025-08-13T07:17:02.208266461Z" level=info msg="CreateContainer within sandbox \"e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:17:02.247378 containerd[1819]: time="2025-08-13T07:17:02.247325469Z" level=info msg="CreateContainer within sandbox \"e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"75518ec40bdb07624d72ce1b1fe39424a083e2ede4be4e0190cfa0275236fc15\"" Aug 13 07:17:02.249223 containerd[1819]: time="2025-08-13T07:17:02.247899676Z" level=info msg="StartContainer for \"75518ec40bdb07624d72ce1b1fe39424a083e2ede4be4e0190cfa0275236fc15\"" Aug 13 07:17:02.328854 
containerd[1819]: time="2025-08-13T07:17:02.328798927Z" level=info msg="StartContainer for \"75518ec40bdb07624d72ce1b1fe39424a083e2ede4be4e0190cfa0275236fc15\" returns successfully" Aug 13 07:17:02.332377 containerd[1819]: time="2025-08-13T07:17:02.332333273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:17:03.469539 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Aug 13 07:17:03.751262 containerd[1819]: time="2025-08-13T07:17:03.749227787Z" level=info msg="StopPodSandbox for \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\"" Aug 13 07:17:03.751262 containerd[1819]: time="2025-08-13T07:17:03.750479003Z" level=info msg="StopPodSandbox for \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\"" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.828 [INFO][4995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.829 [INFO][4995] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" iface="eth0" netns="/var/run/netns/cni-199df6d2-9e7c-6362-9c43-f7213491a150" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.829 [INFO][4995] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" iface="eth0" netns="/var/run/netns/cni-199df6d2-9e7c-6362-9c43-f7213491a150" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.829 [INFO][4995] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" iface="eth0" netns="/var/run/netns/cni-199df6d2-9e7c-6362-9c43-f7213491a150" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.829 [INFO][4995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.829 [INFO][4995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.870 [INFO][5006] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" HandleID="k8s-pod-network.7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.870 [INFO][5006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.870 [INFO][5006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.877 [WARNING][5006] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" HandleID="k8s-pod-network.7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.877 [INFO][5006] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" HandleID="k8s-pod-network.7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.879 [INFO][5006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:03.884867 containerd[1819]: 2025-08-13 07:17:03.881 [INFO][4995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:03.886774 containerd[1819]: time="2025-08-13T07:17:03.885278255Z" level=info msg="TearDown network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\" successfully" Aug 13 07:17:03.886774 containerd[1819]: time="2025-08-13T07:17:03.885318555Z" level=info msg="StopPodSandbox for \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\" returns successfully" Aug 13 07:17:03.889656 containerd[1819]: time="2025-08-13T07:17:03.888709599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b4tm5,Uid:8e73f852-6c7b-406b-995e-adedee6784ab,Namespace:kube-system,Attempt:1,}" Aug 13 07:17:03.893823 systemd[1]: run-netns-cni\x2d199df6d2\x2d9e7c\x2d6362\x2d9c43\x2df7213491a150.mount: Deactivated successfully. 
Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.824 [INFO][4989] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.828 [INFO][4989] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" iface="eth0" netns="/var/run/netns/cni-74cc513a-48bc-4db8-4a37-a78d8a1527ad" Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.829 [INFO][4989] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" iface="eth0" netns="/var/run/netns/cni-74cc513a-48bc-4db8-4a37-a78d8a1527ad" Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.830 [INFO][4989] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" iface="eth0" netns="/var/run/netns/cni-74cc513a-48bc-4db8-4a37-a78d8a1527ad" Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.830 [INFO][4989] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.830 [INFO][4989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.871 [INFO][5008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" HandleID="k8s-pod-network.d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 
07:17:03.872 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.879 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.886 [WARNING][5008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" HandleID="k8s-pod-network.d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.886 [INFO][5008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" HandleID="k8s-pod-network.d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.890 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:03.897393 containerd[1819]: 2025-08-13 07:17:03.894 [INFO][4989] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:03.899173 containerd[1819]: time="2025-08-13T07:17:03.898551827Z" level=info msg="TearDown network for sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\" successfully" Aug 13 07:17:03.899173 containerd[1819]: time="2025-08-13T07:17:03.898601828Z" level=info msg="StopPodSandbox for \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\" returns successfully" Aug 13 07:17:03.902225 containerd[1819]: time="2025-08-13T07:17:03.900535653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69594ffd6c-rdm87,Uid:0df5d788-89a4-43a2-8ef9-a2eb77539527,Namespace:calico-system,Attempt:1,}" Aug 13 07:17:03.902702 systemd[1]: run-netns-cni\x2d74cc513a\x2d48bc\x2d4db8\x2d4a37\x2da78d8a1527ad.mount: Deactivated successfully. Aug 13 07:17:04.115644 systemd-networkd[1388]: cali30001e72ce6: Link UP Aug 13 07:17:04.116008 systemd-networkd[1388]: cali30001e72ce6: Gained carrier Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.010 [INFO][5029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0 calico-kube-controllers-69594ffd6c- calico-system 0df5d788-89a4-43a2-8ef9-a2eb77539527 952 0 2025-08-13 07:16:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69594ffd6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-a-0c3b310332 calico-kube-controllers-69594ffd6c-rdm87 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali30001e72ce6 [] [] }} ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Namespace="calico-system" 
Pod="calico-kube-controllers-69594ffd6c-rdm87" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.010 [INFO][5029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Namespace="calico-system" Pod="calico-kube-controllers-69594ffd6c-rdm87" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.052 [INFO][5042] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" HandleID="k8s-pod-network.72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.052 [INFO][5042] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" HandleID="k8s-pod-network.72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-0c3b310332", "pod":"calico-kube-controllers-69594ffd6c-rdm87", "timestamp":"2025-08-13 07:17:04.052094923 +0000 UTC"}, Hostname:"ci-4081.3.5-a-0c3b310332", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.052 [INFO][5042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.052 [INFO][5042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.052 [INFO][5042] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-0c3b310332' Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.062 [INFO][5042] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.068 [INFO][5042] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.073 [INFO][5042] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.076 [INFO][5042] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.079 [INFO][5042] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.080 [INFO][5042] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.082 [INFO][5042] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5 Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.088 [INFO][5042] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" 
host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.099 [INFO][5042] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.22.66/26] block=192.168.22.64/26 handle="k8s-pod-network.72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.099 [INFO][5042] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.66/26] handle="k8s-pod-network.72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.099 [INFO][5042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:04.139170 containerd[1819]: 2025-08-13 07:17:04.099 [INFO][5042] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.66/26] IPv6=[] ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" HandleID="k8s-pod-network.72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:04.141510 containerd[1819]: 2025-08-13 07:17:04.102 [INFO][5029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Namespace="calico-system" Pod="calico-kube-controllers-69594ffd6c-rdm87" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0", GenerateName:"calico-kube-controllers-69594ffd6c-", Namespace:"calico-system", SelfLink:"", UID:"0df5d788-89a4-43a2-8ef9-a2eb77539527", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 38, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69594ffd6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"", Pod:"calico-kube-controllers-69594ffd6c-rdm87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali30001e72ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:04.141510 containerd[1819]: 2025-08-13 07:17:04.102 [INFO][5029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.66/32] ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Namespace="calico-system" Pod="calico-kube-controllers-69594ffd6c-rdm87" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:04.141510 containerd[1819]: 2025-08-13 07:17:04.102 [INFO][5029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30001e72ce6 ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Namespace="calico-system" Pod="calico-kube-controllers-69594ffd6c-rdm87" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:04.141510 containerd[1819]: 2025-08-13 07:17:04.117 [INFO][5029] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Namespace="calico-system" Pod="calico-kube-controllers-69594ffd6c-rdm87" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:04.141510 containerd[1819]: 2025-08-13 07:17:04.118 [INFO][5029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Namespace="calico-system" Pod="calico-kube-controllers-69594ffd6c-rdm87" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0", GenerateName:"calico-kube-controllers-69594ffd6c-", Namespace:"calico-system", SelfLink:"", UID:"0df5d788-89a4-43a2-8ef9-a2eb77539527", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69594ffd6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5", Pod:"calico-kube-controllers-69594ffd6c-rdm87", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali30001e72ce6", MAC:"6a:77:ba:2f:ae:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:04.141510 containerd[1819]: 2025-08-13 07:17:04.136 [INFO][5029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5" Namespace="calico-system" Pod="calico-kube-controllers-69594ffd6c-rdm87" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:04.175776 containerd[1819]: time="2025-08-13T07:17:04.175122522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:04.175776 containerd[1819]: time="2025-08-13T07:17:04.175226923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:04.175776 containerd[1819]: time="2025-08-13T07:17:04.175249323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:04.175776 containerd[1819]: time="2025-08-13T07:17:04.175393625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:04.227442 systemd-networkd[1388]: calie722b9a935f: Link UP Aug 13 07:17:04.230236 systemd-networkd[1388]: calie722b9a935f: Gained carrier Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.014 [INFO][5019] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0 coredns-7c65d6cfc9- kube-system 8e73f852-6c7b-406b-995e-adedee6784ab 953 0 2025-08-13 07:16:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-0c3b310332 coredns-7c65d6cfc9-b4tm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie722b9a935f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b4tm5" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.014 [INFO][5019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b4tm5" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.058 [INFO][5047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" HandleID="k8s-pod-network.d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.058 [INFO][5047] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" HandleID="k8s-pod-network.d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-0c3b310332", "pod":"coredns-7c65d6cfc9-b4tm5", "timestamp":"2025-08-13 07:17:04.0580183 +0000 UTC"}, Hostname:"ci-4081.3.5-a-0c3b310332", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.058 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.100 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.100 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-0c3b310332' Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.163 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.171 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.180 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.184 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.188 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.188 [INFO][5047] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.194 [INFO][5047] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394 Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.204 [INFO][5047] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.215 [INFO][5047] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.22.67/26] block=192.168.22.64/26 handle="k8s-pod-network.d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.215 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.67/26] handle="k8s-pod-network.d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.215 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:04.252927 containerd[1819]: 2025-08-13 07:17:04.215 [INFO][5047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.67/26] IPv6=[] ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" HandleID="k8s-pod-network.d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:04.254475 containerd[1819]: 2025-08-13 07:17:04.220 [INFO][5019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b4tm5" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8e73f852-6c7b-406b-995e-adedee6784ab", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"", Pod:"coredns-7c65d6cfc9-b4tm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie722b9a935f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:04.254475 containerd[1819]: 2025-08-13 07:17:04.220 [INFO][5019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.67/32] ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b4tm5" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:04.254475 containerd[1819]: 2025-08-13 07:17:04.220 [INFO][5019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie722b9a935f ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b4tm5" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:04.254475 containerd[1819]: 2025-08-13 07:17:04.230 [INFO][5019] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b4tm5" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:04.254475 containerd[1819]: 2025-08-13 07:17:04.232 [INFO][5019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b4tm5" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8e73f852-6c7b-406b-995e-adedee6784ab", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394", Pod:"coredns-7c65d6cfc9-b4tm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie722b9a935f", MAC:"36:0b:94:ab:7c:7a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:04.254475 containerd[1819]: 2025-08-13 07:17:04.249 [INFO][5019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b4tm5" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:04.281430 containerd[1819]: time="2025-08-13T07:17:04.281384903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69594ffd6c-rdm87,Uid:0df5d788-89a4-43a2-8ef9-a2eb77539527,Namespace:calico-system,Attempt:1,} returns sandbox id \"72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5\"" Aug 13 07:17:04.306423 containerd[1819]: time="2025-08-13T07:17:04.306185525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:04.307290 containerd[1819]: time="2025-08-13T07:17:04.306889734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:04.307290 containerd[1819]: time="2025-08-13T07:17:04.306921234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:04.307290 containerd[1819]: time="2025-08-13T07:17:04.307047936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:04.364027 containerd[1819]: time="2025-08-13T07:17:04.363975276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b4tm5,Uid:8e73f852-6c7b-406b-995e-adedee6784ab,Namespace:kube-system,Attempt:1,} returns sandbox id \"d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394\"" Aug 13 07:17:04.368340 containerd[1819]: time="2025-08-13T07:17:04.367412521Z" level=info msg="CreateContainer within sandbox \"d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:17:04.412956 containerd[1819]: time="2025-08-13T07:17:04.412898812Z" level=info msg="CreateContainer within sandbox \"d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8a122bc58d68f81ad996f5e7005b6349a582362b1523c4f755721157bbb834e\"" Aug 13 07:17:04.413583 containerd[1819]: time="2025-08-13T07:17:04.413546920Z" level=info msg="StartContainer for \"d8a122bc58d68f81ad996f5e7005b6349a582362b1523c4f755721157bbb834e\"" Aug 13 07:17:04.469240 containerd[1819]: time="2025-08-13T07:17:04.469141643Z" level=info msg="StartContainer for \"d8a122bc58d68f81ad996f5e7005b6349a582362b1523c4f755721157bbb834e\" returns successfully" Aug 13 07:17:04.749940 containerd[1819]: time="2025-08-13T07:17:04.749434085Z" level=info msg="StopPodSandbox for \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\"" Aug 13 07:17:04.752235 containerd[1819]: time="2025-08-13T07:17:04.751248109Z" level=info msg="StopPodSandbox for \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\"" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:04.873 
[INFO][5217] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:04.873 [INFO][5217] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" iface="eth0" netns="/var/run/netns/cni-3762069d-663d-0f3c-6bde-62e85d361157" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:04.873 [INFO][5217] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" iface="eth0" netns="/var/run/netns/cni-3762069d-663d-0f3c-6bde-62e85d361157" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:04.874 [INFO][5217] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" iface="eth0" netns="/var/run/netns/cni-3762069d-663d-0f3c-6bde-62e85d361157" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:04.874 [INFO][5217] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:04.874 [INFO][5217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:04.996 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" HandleID="k8s-pod-network.62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:04.996 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:04.997 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:05.007 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" HandleID="k8s-pod-network.62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:05.007 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" HandleID="k8s-pod-network.62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:05.010 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:05.019420 containerd[1819]: 2025-08-13 07:17:05.014 [INFO][5217] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:05.026179 containerd[1819]: time="2025-08-13T07:17:05.025304084Z" level=info msg="TearDown network for sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\" successfully" Aug 13 07:17:05.026179 containerd[1819]: time="2025-08-13T07:17:05.025351885Z" level=info msg="StopPodSandbox for \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\" returns successfully" Aug 13 07:17:05.028576 containerd[1819]: time="2025-08-13T07:17:05.028531227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spflk,Uid:7b98a07a-1303-4fe2-9cdb-fec495674a1b,Namespace:calico-system,Attempt:1,}" Aug 13 07:17:05.031824 systemd[1]: run-netns-cni\x2d3762069d\x2d663d\x2d0f3c\x2d6bde\x2d62e85d361157.mount: Deactivated successfully. Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:04.894 [INFO][5210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:04.896 [INFO][5210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" iface="eth0" netns="/var/run/netns/cni-43627593-aa44-6da8-3a5e-50a5512638e6" Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:04.899 [INFO][5210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" iface="eth0" netns="/var/run/netns/cni-43627593-aa44-6da8-3a5e-50a5512638e6" Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:04.900 [INFO][5210] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" iface="eth0" netns="/var/run/netns/cni-43627593-aa44-6da8-3a5e-50a5512638e6" Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:04.900 [INFO][5210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:04.900 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:05.001 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" HandleID="k8s-pod-network.c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:05.001 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:05.010 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:05.021 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" HandleID="k8s-pod-network.c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:05.021 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" HandleID="k8s-pod-network.c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:05.025 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:05.038939 containerd[1819]: 2025-08-13 07:17:05.034 [INFO][5210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:05.039963 containerd[1819]: time="2025-08-13T07:17:05.039620473Z" level=info msg="TearDown network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\" successfully" Aug 13 07:17:05.039963 containerd[1819]: time="2025-08-13T07:17:05.039654973Z" level=info msg="StopPodSandbox for \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\" returns successfully" Aug 13 07:17:05.044713 containerd[1819]: time="2025-08-13T07:17:05.044673440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ppgrt,Uid:2d672f67-c714-47a4-acde-08f33595be3a,Namespace:calico-system,Attempt:1,}" Aug 13 07:17:05.045074 systemd[1]: run-netns-cni\x2d43627593\x2daa44\x2d6da8\x2d3a5e\x2d50a5512638e6.mount: Deactivated successfully. 
Aug 13 07:17:05.096446 kubelet[3358]: I0813 07:17:05.093831 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-b4tm5" podStartSLOduration=42.093804488 podStartE2EDuration="42.093804488s" podCreationTimestamp="2025-08-13 07:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:05.066542828 +0000 UTC m=+48.649046622" watchObservedRunningTime="2025-08-13 07:17:05.093804488 +0000 UTC m=+48.676308282" Aug 13 07:17:05.325466 systemd-networkd[1388]: cali30001e72ce6: Gained IPv6LL Aug 13 07:17:05.382393 systemd-networkd[1388]: cali1931d33429c: Link UP Aug 13 07:17:05.382714 systemd-networkd[1388]: cali1931d33429c: Gained carrier Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.267 [INFO][5243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0 csi-node-driver- calico-system 7b98a07a-1303-4fe2-9cdb-fec495674a1b 969 0 2025-08-13 07:16:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-a-0c3b310332 csi-node-driver-spflk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1931d33429c [] [] }} ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Namespace="calico-system" Pod="csi-node-driver-spflk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.268 [INFO][5243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Namespace="calico-system" Pod="csi-node-driver-spflk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.315 [INFO][5271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" HandleID="k8s-pod-network.e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.315 [INFO][5271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" HandleID="k8s-pod-network.e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-0c3b310332", "pod":"csi-node-driver-spflk", "timestamp":"2025-08-13 07:17:05.315553516 +0000 UTC"}, Hostname:"ci-4081.3.5-a-0c3b310332", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.315 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.315 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.316 [INFO][5271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-0c3b310332' Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.327 [INFO][5271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.334 [INFO][5271] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.341 [INFO][5271] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.344 [INFO][5271] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.347 [INFO][5271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.347 [INFO][5271] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.349 [INFO][5271] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21 Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.355 [INFO][5271] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.368 [INFO][5271] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.22.68/26] block=192.168.22.64/26 handle="k8s-pod-network.e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.368 [INFO][5271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.68/26] handle="k8s-pod-network.e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.368 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:05.406732 containerd[1819]: 2025-08-13 07:17:05.368 [INFO][5271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.68/26] IPv6=[] ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" HandleID="k8s-pod-network.e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.410249 containerd[1819]: 2025-08-13 07:17:05.376 [INFO][5243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Namespace="calico-system" Pod="csi-node-driver-spflk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b98a07a-1303-4fe2-9cdb-fec495674a1b", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"", Pod:"csi-node-driver-spflk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1931d33429c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:05.410249 containerd[1819]: 2025-08-13 07:17:05.377 [INFO][5243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.68/32] ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Namespace="calico-system" Pod="csi-node-driver-spflk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.410249 containerd[1819]: 2025-08-13 07:17:05.377 [INFO][5243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1931d33429c ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Namespace="calico-system" Pod="csi-node-driver-spflk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.410249 containerd[1819]: 2025-08-13 07:17:05.379 [INFO][5243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Namespace="calico-system" Pod="csi-node-driver-spflk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.410249 
containerd[1819]: 2025-08-13 07:17:05.379 [INFO][5243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Namespace="calico-system" Pod="csi-node-driver-spflk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b98a07a-1303-4fe2-9cdb-fec495674a1b", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21", Pod:"csi-node-driver-spflk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1931d33429c", MAC:"8e:03:7e:46:96:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:05.410249 containerd[1819]: 
2025-08-13 07:17:05.398 [INFO][5243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21" Namespace="calico-system" Pod="csi-node-driver-spflk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:05.479113 systemd-networkd[1388]: cali6186c38648b: Link UP Aug 13 07:17:05.479424 systemd-networkd[1388]: cali6186c38648b: Gained carrier Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.272 [INFO][5253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0 goldmane-58fd7646b9- calico-system 2d672f67-c714-47a4-acde-08f33595be3a 970 0 2025-08-13 07:16:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-a-0c3b310332 goldmane-58fd7646b9-ppgrt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6186c38648b [] [] }} ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Namespace="calico-system" Pod="goldmane-58fd7646b9-ppgrt" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.273 [INFO][5253] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Namespace="calico-system" Pod="goldmane-58fd7646b9-ppgrt" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.318 [INFO][5276] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" 
HandleID="k8s-pod-network.9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.318 [INFO][5276] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" HandleID="k8s-pod-network.9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5160), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-0c3b310332", "pod":"goldmane-58fd7646b9-ppgrt", "timestamp":"2025-08-13 07:17:05.318225251 +0000 UTC"}, Hostname:"ci-4081.3.5-a-0c3b310332", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.318 [INFO][5276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.368 [INFO][5276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.368 [INFO][5276] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-0c3b310332' Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.428 [INFO][5276] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.436 [INFO][5276] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.442 [INFO][5276] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.445 [INFO][5276] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.448 [INFO][5276] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.448 [INFO][5276] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.450 [INFO][5276] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985 Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.459 [INFO][5276] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.472 [INFO][5276] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.22.69/26] block=192.168.22.64/26 handle="k8s-pod-network.9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.472 [INFO][5276] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.69/26] handle="k8s-pod-network.9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.472 [INFO][5276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:05.506197 containerd[1819]: 2025-08-13 07:17:05.472 [INFO][5276] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.69/26] IPv6=[] ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" HandleID="k8s-pod-network.9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.511555 containerd[1819]: 2025-08-13 07:17:05.475 [INFO][5253] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Namespace="calico-system" Pod="goldmane-58fd7646b9-ppgrt" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2d672f67-c714-47a4-acde-08f33595be3a", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"", Pod:"goldmane-58fd7646b9-ppgrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6186c38648b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:05.511555 containerd[1819]: 2025-08-13 07:17:05.475 [INFO][5253] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.69/32] ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Namespace="calico-system" Pod="goldmane-58fd7646b9-ppgrt" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.511555 containerd[1819]: 2025-08-13 07:17:05.475 [INFO][5253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6186c38648b ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Namespace="calico-system" Pod="goldmane-58fd7646b9-ppgrt" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.511555 containerd[1819]: 2025-08-13 07:17:05.478 [INFO][5253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Namespace="calico-system" Pod="goldmane-58fd7646b9-ppgrt" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.511555 containerd[1819]: 2025-08-13 07:17:05.480 [INFO][5253] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Namespace="calico-system" Pod="goldmane-58fd7646b9-ppgrt" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2d672f67-c714-47a4-acde-08f33595be3a", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985", Pod:"goldmane-58fd7646b9-ppgrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6186c38648b", MAC:"0e:93:d1:24:77:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:05.511555 containerd[1819]: 2025-08-13 07:17:05.501 [INFO][5253] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985" Namespace="calico-system" Pod="goldmane-58fd7646b9-ppgrt" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:05.589034 containerd[1819]: time="2025-08-13T07:17:05.588227616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:05.589034 containerd[1819]: time="2025-08-13T07:17:05.588299717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:05.589034 containerd[1819]: time="2025-08-13T07:17:05.588322517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:05.589034 containerd[1819]: time="2025-08-13T07:17:05.588435819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:05.600432 containerd[1819]: time="2025-08-13T07:17:05.599907470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:05.600432 containerd[1819]: time="2025-08-13T07:17:05.599988571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:05.600432 containerd[1819]: time="2025-08-13T07:17:05.600011471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:05.601382 containerd[1819]: time="2025-08-13T07:17:05.601289088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:05.686914 containerd[1819]: time="2025-08-13T07:17:05.686761817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spflk,Uid:7b98a07a-1303-4fe2-9cdb-fec495674a1b,Namespace:calico-system,Attempt:1,} returns sandbox id \"e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21\"" Aug 13 07:17:05.715764 containerd[1819]: time="2025-08-13T07:17:05.715533697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ppgrt,Uid:2d672f67-c714-47a4-acde-08f33595be3a,Namespace:calico-system,Attempt:1,} returns sandbox id \"9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985\"" Aug 13 07:17:05.749481 containerd[1819]: time="2025-08-13T07:17:05.749436044Z" level=info msg="StopPodSandbox for \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\"" Aug 13 07:17:05.753712 containerd[1819]: time="2025-08-13T07:17:05.753667400Z" level=info msg="StopPodSandbox for \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\"" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.873 [INFO][5408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.874 [INFO][5408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" iface="eth0" netns="/var/run/netns/cni-76b64c0a-579f-5ac9-4034-459a118fca3f" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.874 [INFO][5408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" iface="eth0" netns="/var/run/netns/cni-76b64c0a-579f-5ac9-4034-459a118fca3f" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.875 [INFO][5408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" iface="eth0" netns="/var/run/netns/cni-76b64c0a-579f-5ac9-4034-459a118fca3f" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.875 [INFO][5408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.875 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.948 [INFO][5421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" HandleID="k8s-pod-network.9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.948 [INFO][5421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.948 [INFO][5421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.957 [WARNING][5421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" HandleID="k8s-pod-network.9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.957 [INFO][5421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" HandleID="k8s-pod-network.9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.963 [INFO][5421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:05.973129 containerd[1819]: 2025-08-13 07:17:05.969 [INFO][5408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:05.978348 containerd[1819]: time="2025-08-13T07:17:05.977272652Z" level=info msg="TearDown network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\" successfully" Aug 13 07:17:05.978348 containerd[1819]: time="2025-08-13T07:17:05.977416354Z" level=info msg="StopPodSandbox for \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\" returns successfully" Aug 13 07:17:05.988237 containerd[1819]: time="2025-08-13T07:17:05.986299171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rqkdd,Uid:86478e9f-1c1b-4a14-b1bf-da6e5b798f3d,Namespace:kube-system,Attempt:1,}" Aug 13 07:17:05.992059 systemd[1]: run-netns-cni\x2d76b64c0a\x2d579f\x2d5ac9\x2d4034\x2d459a118fca3f.mount: Deactivated successfully. 
Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:05.881 [INFO][5404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:05.881 [INFO][5404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" iface="eth0" netns="/var/run/netns/cni-621519b1-3e41-4120-4bb4-6b2c0f3926e0" Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:05.882 [INFO][5404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" iface="eth0" netns="/var/run/netns/cni-621519b1-3e41-4120-4bb4-6b2c0f3926e0" Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:05.882 [INFO][5404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" iface="eth0" netns="/var/run/netns/cni-621519b1-3e41-4120-4bb4-6b2c0f3926e0" Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:05.882 [INFO][5404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:05.883 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:05.989 [INFO][5426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" HandleID="k8s-pod-network.91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:05.993 
[INFO][5426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:05.993 [INFO][5426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:06.007 [WARNING][5426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" HandleID="k8s-pod-network.91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:06.009 [INFO][5426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" HandleID="k8s-pod-network.91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:06.012 [INFO][5426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:06.023338 containerd[1819]: 2025-08-13 07:17:06.019 [INFO][5404] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Aug 13 07:17:06.023953 containerd[1819]: time="2025-08-13T07:17:06.023520263Z" level=info msg="TearDown network for sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\" successfully" Aug 13 07:17:06.023953 containerd[1819]: time="2025-08-13T07:17:06.023555563Z" level=info msg="StopPodSandbox for \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\" returns successfully" Aug 13 07:17:06.028159 containerd[1819]: time="2025-08-13T07:17:06.026519202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54888cfc75-zm9lk,Uid:892537e6-36f9-4f24-929e-348869a4ec0b,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:17:06.031056 systemd-networkd[1388]: calie722b9a935f: Gained IPv6LL Aug 13 07:17:06.042632 systemd[1]: run-netns-cni\x2d621519b1\x2d3e41\x2d4120\x2d4bb4\x2d6b2c0f3926e0.mount: Deactivated successfully. Aug 13 07:17:06.365268 systemd-networkd[1388]: cali979770affa5: Link UP Aug 13 07:17:06.368732 systemd-networkd[1388]: cali979770affa5: Gained carrier Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.172 [INFO][5435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0 coredns-7c65d6cfc9- kube-system 86478e9f-1c1b-4a14-b1bf-da6e5b798f3d 992 0 2025-08-13 07:16:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-0c3b310332 coredns-7c65d6cfc9-rqkdd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali979770affa5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rqkdd" 
WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.173 [INFO][5435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rqkdd" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.243 [INFO][5460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" HandleID="k8s-pod-network.8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.244 [INFO][5460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" HandleID="k8s-pod-network.8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-0c3b310332", "pod":"coredns-7c65d6cfc9-rqkdd", "timestamp":"2025-08-13 07:17:06.243892572 +0000 UTC"}, Hostname:"ci-4081.3.5-a-0c3b310332", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.244 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.248 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.248 [INFO][5460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-0c3b310332' Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.267 [INFO][5460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.274 [INFO][5460] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.285 [INFO][5460] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.290 [INFO][5460] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.302 [INFO][5460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.302 [INFO][5460] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.305 [INFO][5460] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0 Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.316 [INFO][5460] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.327 [INFO][5460] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.22.70/26] block=192.168.22.64/26 handle="k8s-pod-network.8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.327 [INFO][5460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.70/26] handle="k8s-pod-network.8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.327 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:06.401942 containerd[1819]: 2025-08-13 07:17:06.327 [INFO][5460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.70/26] IPv6=[] ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" HandleID="k8s-pod-network.8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:06.404277 containerd[1819]: 2025-08-13 07:17:06.333 [INFO][5435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rqkdd" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"86478e9f-1c1b-4a14-b1bf-da6e5b798f3d", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"", Pod:"coredns-7c65d6cfc9-rqkdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali979770affa5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:06.404277 containerd[1819]: 2025-08-13 07:17:06.335 [INFO][5435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.70/32] ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rqkdd" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:06.404277 containerd[1819]: 2025-08-13 07:17:06.335 [INFO][5435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali979770affa5 ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rqkdd" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:06.404277 containerd[1819]: 2025-08-13 07:17:06.368 [INFO][5435] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rqkdd" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:06.404277 containerd[1819]: 2025-08-13 07:17:06.371 [INFO][5435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rqkdd" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"86478e9f-1c1b-4a14-b1bf-da6e5b798f3d", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0", Pod:"coredns-7c65d6cfc9-rqkdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali979770affa5", MAC:"ba:4c:89:e7:06:b2", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:06.404277 containerd[1819]: 2025-08-13 07:17:06.395 [INFO][5435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rqkdd" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:06.450818 systemd-networkd[1388]: cali7cc0d6085b9: Link UP Aug 13 07:17:06.451049 systemd-networkd[1388]: cali7cc0d6085b9: Gained carrier Aug 13 07:17:06.488189 containerd[1819]: time="2025-08-13T07:17:06.487825693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:06.488189 containerd[1819]: time="2025-08-13T07:17:06.487899094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:06.488189 containerd[1819]: time="2025-08-13T07:17:06.487916694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.488189 containerd[1819]: time="2025-08-13T07:17:06.488029896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.235 [INFO][5444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0 calico-apiserver-54888cfc75- calico-apiserver 892537e6-36f9-4f24-929e-348869a4ec0b 993 0 2025-08-13 07:16:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54888cfc75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-0c3b310332 calico-apiserver-54888cfc75-zm9lk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7cc0d6085b9 [] [] }} ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-zm9lk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.236 [INFO][5444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-zm9lk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.312 [INFO][5467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" HandleID="k8s-pod-network.2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.312 [INFO][5467] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" HandleID="k8s-pod-network.2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-0c3b310332", "pod":"calico-apiserver-54888cfc75-zm9lk", "timestamp":"2025-08-13 07:17:06.312191574 +0000 UTC"}, Hostname:"ci-4081.3.5-a-0c3b310332", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.312 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.327 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.328 [INFO][5467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-0c3b310332' Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.367 [INFO][5467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.384 [INFO][5467] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.400 [INFO][5467] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.406 [INFO][5467] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.413 [INFO][5467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.413 [INFO][5467] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.418 [INFO][5467] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250 Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.425 [INFO][5467] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.438 [INFO][5467] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.22.71/26] block=192.168.22.64/26 handle="k8s-pod-network.2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.438 [INFO][5467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.71/26] handle="k8s-pod-network.2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.438 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:06.493963 containerd[1819]: 2025-08-13 07:17:06.438 [INFO][5467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.71/26] IPv6=[] ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" HandleID="k8s-pod-network.2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.496627 containerd[1819]: 2025-08-13 07:17:06.446 [INFO][5444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-zm9lk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0", GenerateName:"calico-apiserver-54888cfc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"892537e6-36f9-4f24-929e-348869a4ec0b", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"54888cfc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"", Pod:"calico-apiserver-54888cfc75-zm9lk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc0d6085b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:06.496627 containerd[1819]: 2025-08-13 07:17:06.446 [INFO][5444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.71/32] ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-zm9lk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.496627 containerd[1819]: 2025-08-13 07:17:06.446 [INFO][5444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cc0d6085b9 ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-zm9lk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.496627 containerd[1819]: 2025-08-13 07:17:06.449 [INFO][5444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-zm9lk" 
WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.496627 containerd[1819]: 2025-08-13 07:17:06.450 [INFO][5444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-zm9lk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0", GenerateName:"calico-apiserver-54888cfc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"892537e6-36f9-4f24-929e-348869a4ec0b", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54888cfc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250", Pod:"calico-apiserver-54888cfc75-zm9lk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc0d6085b9", MAC:"3e:66:13:bd:29:e2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:06.496627 containerd[1819]: 2025-08-13 07:17:06.485 [INFO][5444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-zm9lk" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:06.572327 containerd[1819]: time="2025-08-13T07:17:06.562662081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:06.572327 containerd[1819]: time="2025-08-13T07:17:06.564670107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:06.572327 containerd[1819]: time="2025-08-13T07:17:06.564691808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.572327 containerd[1819]: time="2025-08-13T07:17:06.564843710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.606632 systemd-networkd[1388]: cali1931d33429c: Gained IPv6LL Aug 13 07:17:06.650015 containerd[1819]: time="2025-08-13T07:17:06.649888132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rqkdd,Uid:86478e9f-1c1b-4a14-b1bf-da6e5b798f3d,Namespace:kube-system,Attempt:1,} returns sandbox id \"8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0\"" Aug 13 07:17:06.665528 containerd[1819]: time="2025-08-13T07:17:06.665480638Z" level=info msg="CreateContainer within sandbox \"8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:17:06.676995 containerd[1819]: time="2025-08-13T07:17:06.676858889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54888cfc75-zm9lk,Uid:892537e6-36f9-4f24-929e-348869a4ec0b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250\"" Aug 13 07:17:06.736003 containerd[1819]: time="2025-08-13T07:17:06.735948769Z" level=info msg="CreateContainer within sandbox \"8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c29da9bc83e579f89439465f96f3a7a5f044a519af68d5dd3fcf7ee645ca1a0a\"" Aug 13 07:17:06.739123 containerd[1819]: time="2025-08-13T07:17:06.738369601Z" level=info msg="StartContainer for \"c29da9bc83e579f89439465f96f3a7a5f044a519af68d5dd3fcf7ee645ca1a0a\"" Aug 13 07:17:06.751959 containerd[1819]: time="2025-08-13T07:17:06.751916679Z" level=info msg="StopPodSandbox for \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\"" Aug 13 07:17:06.927744 systemd-networkd[1388]: cali6186c38648b: Gained IPv6LL Aug 13 07:17:06.949923 containerd[1819]: time="2025-08-13T07:17:06.949857193Z" level=info msg="StartContainer for 
\"c29da9bc83e579f89439465f96f3a7a5f044a519af68d5dd3fcf7ee645ca1a0a\" returns successfully" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:06.950 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:06.951 [INFO][5591] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" iface="eth0" netns="/var/run/netns/cni-15686352-63b9-96cf-f60d-5ada5a4b1d37" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:06.952 [INFO][5591] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" iface="eth0" netns="/var/run/netns/cni-15686352-63b9-96cf-f60d-5ada5a4b1d37" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:06.953 [INFO][5591] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" iface="eth0" netns="/var/run/netns/cni-15686352-63b9-96cf-f60d-5ada5a4b1d37" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:06.953 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:06.953 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:07.047 [INFO][5625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" HandleID="k8s-pod-network.ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:07.048 [INFO][5625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:07.048 [INFO][5625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:07.056 [WARNING][5625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" HandleID="k8s-pod-network.ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:07.056 [INFO][5625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" HandleID="k8s-pod-network.ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:07.058 [INFO][5625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:07.081289 containerd[1819]: 2025-08-13 07:17:07.065 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:07.081289 containerd[1819]: time="2025-08-13T07:17:07.074597240Z" level=info msg="TearDown network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\" successfully" Aug 13 07:17:07.081289 containerd[1819]: time="2025-08-13T07:17:07.074633140Z" level=info msg="StopPodSandbox for \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\" returns successfully" Aug 13 07:17:07.089428 containerd[1819]: time="2025-08-13T07:17:07.088685226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54888cfc75-2xbg6,Uid:78949a8e-334b-48f9-97e4-1c8e76cdaa5a,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:17:07.092909 systemd[1]: run-netns-cni\x2d15686352\x2d63b9\x2d96cf\x2df60d\x2d5ada5a4b1d37.mount: Deactivated successfully. 
Aug 13 07:17:07.142359 kubelet[3358]: I0813 07:17:07.138646 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rqkdd" podStartSLOduration=45.138600285 podStartE2EDuration="45.138600285s" podCreationTimestamp="2025-08-13 07:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:07.138297181 +0000 UTC m=+50.720800975" watchObservedRunningTime="2025-08-13 07:17:07.138600285 +0000 UTC m=+50.721104079" Aug 13 07:17:07.324209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115087001.mount: Deactivated successfully. Aug 13 07:17:07.372006 systemd-networkd[1388]: cali5c22137ada7: Link UP Aug 13 07:17:07.374070 systemd-networkd[1388]: cali5c22137ada7: Gained carrier Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.256 [INFO][5643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0 calico-apiserver-54888cfc75- calico-apiserver 78949a8e-334b-48f9-97e4-1c8e76cdaa5a 1005 0 2025-08-13 07:16:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54888cfc75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-0c3b310332 calico-apiserver-54888cfc75-2xbg6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5c22137ada7 [] [] }} ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-2xbg6" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.257 [INFO][5643] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-2xbg6" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.311 [INFO][5659] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" HandleID="k8s-pod-network.e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.311 [INFO][5659] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" HandleID="k8s-pod-network.e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-0c3b310332", "pod":"calico-apiserver-54888cfc75-2xbg6", "timestamp":"2025-08-13 07:17:07.311023861 +0000 UTC"}, Hostname:"ci-4081.3.5-a-0c3b310332", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.311 [INFO][5659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.311 [INFO][5659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.311 [INFO][5659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-0c3b310332' Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.325 [INFO][5659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.332 [INFO][5659] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.338 [INFO][5659] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.340 [INFO][5659] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.343 [INFO][5659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.343 [INFO][5659] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.345 [INFO][5659] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.350 [INFO][5659] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.360 [INFO][5659] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.22.72/26] block=192.168.22.64/26 handle="k8s-pod-network.e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.361 [INFO][5659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.72/26] handle="k8s-pod-network.e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" host="ci-4081.3.5-a-0c3b310332" Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.362 [INFO][5659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:07.394514 containerd[1819]: 2025-08-13 07:17:07.362 [INFO][5659] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.22.72/26] IPv6=[] ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" HandleID="k8s-pod-network.e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.396781 containerd[1819]: 2025-08-13 07:17:07.365 [INFO][5643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-2xbg6" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0", GenerateName:"calico-apiserver-54888cfc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"78949a8e-334b-48f9-97e4-1c8e76cdaa5a", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"54888cfc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"", Pod:"calico-apiserver-54888cfc75-2xbg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c22137ada7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:07.396781 containerd[1819]: 2025-08-13 07:17:07.365 [INFO][5643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.72/32] ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-2xbg6" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.396781 containerd[1819]: 2025-08-13 07:17:07.365 [INFO][5643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c22137ada7 ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-2xbg6" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.396781 containerd[1819]: 2025-08-13 07:17:07.372 [INFO][5643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-2xbg6" 
WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.396781 containerd[1819]: 2025-08-13 07:17:07.373 [INFO][5643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-2xbg6" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0", GenerateName:"calico-apiserver-54888cfc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"78949a8e-334b-48f9-97e4-1c8e76cdaa5a", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54888cfc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc", Pod:"calico-apiserver-54888cfc75-2xbg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c22137ada7", MAC:"aa:88:86:de:2d:5c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:07.396781 containerd[1819]: 2025-08-13 07:17:07.391 [INFO][5643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc" Namespace="calico-apiserver" Pod="calico-apiserver-54888cfc75-2xbg6" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:07.405216 containerd[1819]: time="2025-08-13T07:17:07.405128003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:17:07.406787 containerd[1819]: time="2025-08-13T07:17:07.406754625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:07.413446 containerd[1819]: time="2025-08-13T07:17:07.413380412Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:07.414136 containerd[1819]: time="2025-08-13T07:17:07.413889719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.081508445s" Aug 13 07:17:07.414136 containerd[1819]: time="2025-08-13T07:17:07.413935620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:17:07.414450 containerd[1819]: time="2025-08-13T07:17:07.414417526Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:07.416756 containerd[1819]: time="2025-08-13T07:17:07.416727157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:17:07.419571 containerd[1819]: time="2025-08-13T07:17:07.419042087Z" level=info msg="CreateContainer within sandbox \"e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:17:07.439354 containerd[1819]: time="2025-08-13T07:17:07.439254854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:07.439683 containerd[1819]: time="2025-08-13T07:17:07.439540958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:07.439683 containerd[1819]: time="2025-08-13T07:17:07.439584458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:07.440112 containerd[1819]: time="2025-08-13T07:17:07.440042764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:07.473678 containerd[1819]: time="2025-08-13T07:17:07.473626608Z" level=info msg="CreateContainer within sandbox \"e3750b6e1c1a1a19308e863ddce6f108b0661cab002faf80279dfd97d2b892f1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"08a06e904b1cefc1a1b53077c4f8280c7ce1725eb560fc76ecce9bea79dee36b\"" Aug 13 07:17:07.476077 containerd[1819]: time="2025-08-13T07:17:07.476025339Z" level=info msg="StartContainer for \"08a06e904b1cefc1a1b53077c4f8280c7ce1725eb560fc76ecce9bea79dee36b\"" Aug 13 07:17:07.519421 containerd[1819]: time="2025-08-13T07:17:07.519287011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54888cfc75-2xbg6,Uid:78949a8e-334b-48f9-97e4-1c8e76cdaa5a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc\"" Aug 13 07:17:07.570020 containerd[1819]: time="2025-08-13T07:17:07.569027467Z" level=info msg="StartContainer for \"08a06e904b1cefc1a1b53077c4f8280c7ce1725eb560fc76ecce9bea79dee36b\" returns successfully" Aug 13 07:17:08.077598 systemd-networkd[1388]: cali979770affa5: Gained IPv6LL Aug 13 07:17:08.078742 systemd-networkd[1388]: cali7cc0d6085b9: Gained IPv6LL Aug 13 07:17:08.136957 kubelet[3358]: I0813 07:17:08.136865 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-74ff5d98bc-57jhv" podStartSLOduration=1.416339019 podStartE2EDuration="8.136841064s" podCreationTimestamp="2025-08-13 07:17:00 +0000 UTC" firstStartedPulling="2025-08-13 07:17:00.695897107 +0000 UTC m=+44.278400801" lastFinishedPulling="2025-08-13 07:17:07.416399052 +0000 UTC m=+50.998902846" observedRunningTime="2025-08-13 07:17:08.13430253 +0000 UTC m=+51.716806324" watchObservedRunningTime="2025-08-13 07:17:08.136841064 +0000 UTC m=+51.719344858" Aug 13 07:17:09.229606 systemd-networkd[1388]: cali5c22137ada7: Gained IPv6LL Aug 13 
07:17:10.990621 containerd[1819]: time="2025-08-13T07:17:10.990566040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:10.994792 containerd[1819]: time="2025-08-13T07:17:10.994705195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:17:11.001933 containerd[1819]: time="2025-08-13T07:17:11.001533885Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:11.009078 containerd[1819]: time="2025-08-13T07:17:11.009022984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:11.010188 containerd[1819]: time="2025-08-13T07:17:11.009782694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.592799134s" Aug 13 07:17:11.010188 containerd[1819]: time="2025-08-13T07:17:11.009827894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:17:11.011635 containerd[1819]: time="2025-08-13T07:17:11.011530317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:17:11.031992 containerd[1819]: time="2025-08-13T07:17:11.031222677Z" level=info msg="CreateContainer within sandbox 
\"72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:17:11.068448 containerd[1819]: time="2025-08-13T07:17:11.068391568Z" level=info msg="CreateContainer within sandbox \"72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ece5d3865bfa40bfa75cbb52b47bb3d54ae101a27abc631bb2fb08620f7b0747\"" Aug 13 07:17:11.070747 containerd[1819]: time="2025-08-13T07:17:11.069295980Z" level=info msg="StartContainer for \"ece5d3865bfa40bfa75cbb52b47bb3d54ae101a27abc631bb2fb08620f7b0747\"" Aug 13 07:17:11.157133 containerd[1819]: time="2025-08-13T07:17:11.157046238Z" level=info msg="StartContainer for \"ece5d3865bfa40bfa75cbb52b47bb3d54ae101a27abc631bb2fb08620f7b0747\" returns successfully" Aug 13 07:17:12.161572 kubelet[3358]: I0813 07:17:12.159233 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69594ffd6c-rdm87" podStartSLOduration=27.431366585 podStartE2EDuration="34.159194969s" podCreationTimestamp="2025-08-13 07:16:38 +0000 UTC" firstStartedPulling="2025-08-13 07:17:04.283050124 +0000 UTC m=+47.865553818" lastFinishedPulling="2025-08-13 07:17:11.010878408 +0000 UTC m=+54.593382202" observedRunningTime="2025-08-13 07:17:12.153349292 +0000 UTC m=+55.735852986" watchObservedRunningTime="2025-08-13 07:17:12.159194969 +0000 UTC m=+55.741698663" Aug 13 07:17:12.355496 containerd[1819]: time="2025-08-13T07:17:12.355438260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:12.358033 containerd[1819]: time="2025-08-13T07:17:12.357954093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:17:12.362438 containerd[1819]: 
time="2025-08-13T07:17:12.362370751Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:12.367906 containerd[1819]: time="2025-08-13T07:17:12.367822423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:12.369042 containerd[1819]: time="2025-08-13T07:17:12.368526233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.356937214s" Aug 13 07:17:12.369042 containerd[1819]: time="2025-08-13T07:17:12.368569333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:17:12.370001 containerd[1819]: time="2025-08-13T07:17:12.369965552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:17:12.371374 containerd[1819]: time="2025-08-13T07:17:12.371335570Z" level=info msg="CreateContainer within sandbox \"e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:17:12.406815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3833588242.mount: Deactivated successfully. 
Aug 13 07:17:12.415627 containerd[1819]: time="2025-08-13T07:17:12.415505553Z" level=info msg="CreateContainer within sandbox \"e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"70ca6e03cbaa85a8e5bad2f80ba94c1008a6339d5bf7b6af7037f0c2038b0b42\"" Aug 13 07:17:12.418291 containerd[1819]: time="2025-08-13T07:17:12.416680968Z" level=info msg="StartContainer for \"70ca6e03cbaa85a8e5bad2f80ba94c1008a6339d5bf7b6af7037f0c2038b0b42\"" Aug 13 07:17:12.495568 containerd[1819]: time="2025-08-13T07:17:12.495517509Z" level=info msg="StartContainer for \"70ca6e03cbaa85a8e5bad2f80ba94c1008a6339d5bf7b6af7037f0c2038b0b42\" returns successfully" Aug 13 07:17:15.278759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753643197.mount: Deactivated successfully. Aug 13 07:17:16.132443 containerd[1819]: time="2025-08-13T07:17:16.132385454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:16.136846 containerd[1819]: time="2025-08-13T07:17:16.136765011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:17:16.140670 containerd[1819]: time="2025-08-13T07:17:16.140602261Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:16.146002 containerd[1819]: time="2025-08-13T07:17:16.145935430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:16.147350 containerd[1819]: time="2025-08-13T07:17:16.146769941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id 
\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.776758689s" Aug 13 07:17:16.147350 containerd[1819]: time="2025-08-13T07:17:16.146808142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:17:16.149007 containerd[1819]: time="2025-08-13T07:17:16.148272961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:17:16.149520 containerd[1819]: time="2025-08-13T07:17:16.149486077Z" level=info msg="CreateContainer within sandbox \"9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:17:16.205169 containerd[1819]: time="2025-08-13T07:17:16.205084101Z" level=info msg="CreateContainer within sandbox \"9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ae3e7d7a4ba55129a01ed3cd1f8090a1820e9608d41cdf1ee2c1aeeff6d5b99e\"" Aug 13 07:17:16.206976 containerd[1819]: time="2025-08-13T07:17:16.205733109Z" level=info msg="StartContainer for \"ae3e7d7a4ba55129a01ed3cd1f8090a1820e9608d41cdf1ee2c1aeeff6d5b99e\"" Aug 13 07:17:16.251331 systemd[1]: run-containerd-runc-k8s.io-ae3e7d7a4ba55129a01ed3cd1f8090a1820e9608d41cdf1ee2c1aeeff6d5b99e-runc.HoboOf.mount: Deactivated successfully. 
Aug 13 07:17:16.300524 containerd[1819]: time="2025-08-13T07:17:16.300441542Z" level=info msg="StartContainer for \"ae3e7d7a4ba55129a01ed3cd1f8090a1820e9608d41cdf1ee2c1aeeff6d5b99e\" returns successfully" Aug 13 07:17:16.781198 containerd[1819]: time="2025-08-13T07:17:16.780841898Z" level=info msg="StopPodSandbox for \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\"" Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.816 [WARNING][5930] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0", GenerateName:"calico-apiserver-54888cfc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"78949a8e-334b-48f9-97e4-1c8e76cdaa5a", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54888cfc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc", Pod:"calico-apiserver-54888cfc75-2xbg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c22137ada7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.816 [INFO][5930] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.816 [INFO][5930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" iface="eth0" netns="" Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.816 [INFO][5930] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.816 [INFO][5930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.845 [INFO][5937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" HandleID="k8s-pod-network.ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.846 [INFO][5937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.846 [INFO][5937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.853 [WARNING][5937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" HandleID="k8s-pod-network.ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.853 [INFO][5937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" HandleID="k8s-pod-network.ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.855 [INFO][5937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:16.858012 containerd[1819]: 2025-08-13 07:17:16.857 [INFO][5930] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:16.858892 containerd[1819]: time="2025-08-13T07:17:16.858107104Z" level=info msg="TearDown network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\" successfully" Aug 13 07:17:16.858892 containerd[1819]: time="2025-08-13T07:17:16.858138304Z" level=info msg="StopPodSandbox for \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\" returns successfully" Aug 13 07:17:16.858892 containerd[1819]: time="2025-08-13T07:17:16.858624611Z" level=info msg="RemovePodSandbox for \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\"" Aug 13 07:17:16.858892 containerd[1819]: time="2025-08-13T07:17:16.858658011Z" level=info msg="Forcibly stopping sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\"" Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.895 [WARNING][5951] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0", GenerateName:"calico-apiserver-54888cfc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"78949a8e-334b-48f9-97e4-1c8e76cdaa5a", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54888cfc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc", Pod:"calico-apiserver-54888cfc75-2xbg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c22137ada7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.896 [INFO][5951] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.896 [INFO][5951] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" iface="eth0" netns="" Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.896 [INFO][5951] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.896 [INFO][5951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.919 [INFO][5958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" HandleID="k8s-pod-network.ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.919 [INFO][5958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.919 [INFO][5958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.925 [WARNING][5958] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" HandleID="k8s-pod-network.ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.925 [INFO][5958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" HandleID="k8s-pod-network.ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--2xbg6-eth0" Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.927 [INFO][5958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:16.929892 containerd[1819]: 2025-08-13 07:17:16.928 [INFO][5951] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2" Aug 13 07:17:16.930601 containerd[1819]: time="2025-08-13T07:17:16.929948740Z" level=info msg="TearDown network for sandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\" successfully" Aug 13 07:17:16.948129 containerd[1819]: time="2025-08-13T07:17:16.948074776Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:16.948321 containerd[1819]: time="2025-08-13T07:17:16.948184677Z" level=info msg="RemovePodSandbox \"ffe9aaa5189d67b1c0d64dfa787eeca907d3aa809e0ba4cfdc29276bef25bad2\" returns successfully" Aug 13 07:17:16.948843 containerd[1819]: time="2025-08-13T07:17:16.948812485Z" level=info msg="StopPodSandbox for \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\"" Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:16.983 [WARNING][5972] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b98a07a-1303-4fe2-9cdb-fec495674a1b", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21", Pod:"csi-node-driver-spflk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1931d33429c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:16.983 [INFO][5972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:16.983 [INFO][5972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" iface="eth0" netns="" Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:16.983 [INFO][5972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:16.983 [INFO][5972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:17.007 [INFO][5979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" HandleID="k8s-pod-network.62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:17.007 [INFO][5979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:17.007 [INFO][5979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:17.013 [WARNING][5979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" HandleID="k8s-pod-network.62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:17.013 [INFO][5979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" HandleID="k8s-pod-network.62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:17.015 [INFO][5979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:17.017991 containerd[1819]: 2025-08-13 07:17:17.016 [INFO][5972] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:17.018836 containerd[1819]: time="2025-08-13T07:17:17.018034487Z" level=info msg="TearDown network for sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\" successfully" Aug 13 07:17:17.018836 containerd[1819]: time="2025-08-13T07:17:17.018064887Z" level=info msg="StopPodSandbox for \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\" returns successfully" Aug 13 07:17:17.018836 containerd[1819]: time="2025-08-13T07:17:17.018735296Z" level=info msg="RemovePodSandbox for \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\"" Aug 13 07:17:17.018836 containerd[1819]: time="2025-08-13T07:17:17.018770496Z" level=info msg="Forcibly stopping sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\"" Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.060 [WARNING][5994] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b98a07a-1303-4fe2-9cdb-fec495674a1b", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21", Pod:"csi-node-driver-spflk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1931d33429c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.060 [INFO][5994] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.060 [INFO][5994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" iface="eth0" netns="" Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.060 [INFO][5994] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.060 [INFO][5994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.086 [INFO][6001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" HandleID="k8s-pod-network.62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.086 [INFO][6001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.086 [INFO][6001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.092 [WARNING][6001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" HandleID="k8s-pod-network.62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.092 [INFO][6001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" HandleID="k8s-pod-network.62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Workload="ci--4081.3.5--a--0c3b310332-k8s-csi--node--driver--spflk-eth0" Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.094 [INFO][6001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:17.097412 containerd[1819]: 2025-08-13 07:17:17.096 [INFO][5994] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae" Aug 13 07:17:17.098243 containerd[1819]: time="2025-08-13T07:17:17.097524422Z" level=info msg="TearDown network for sandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\" successfully" Aug 13 07:17:17.108827 containerd[1819]: time="2025-08-13T07:17:17.108754768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:17.109006 containerd[1819]: time="2025-08-13T07:17:17.108839469Z" level=info msg="RemovePodSandbox \"62ca9a1058cdf4455d4a26dddae22719d9797f477cfa6811262482ae5ae197ae\" returns successfully" Aug 13 07:17:17.109473 containerd[1819]: time="2025-08-13T07:17:17.109442977Z" level=info msg="StopPodSandbox for \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\"" Aug 13 07:17:17.196835 kubelet[3358]: I0813 07:17:17.196623 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-ppgrt" podStartSLOduration=29.765911675 podStartE2EDuration="40.196597112s" podCreationTimestamp="2025-08-13 07:16:37 +0000 UTC" firstStartedPulling="2025-08-13 07:17:05.717003116 +0000 UTC m=+49.299506810" lastFinishedPulling="2025-08-13 07:17:16.147688553 +0000 UTC m=+59.730192247" observedRunningTime="2025-08-13 07:17:17.195536398 +0000 UTC m=+60.778040092" watchObservedRunningTime="2025-08-13 07:17:17.196597112 +0000 UTC m=+60.779100806" Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.164 [WARNING][6015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2d672f67-c714-47a4-acde-08f33595be3a", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985", Pod:"goldmane-58fd7646b9-ppgrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6186c38648b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.164 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.164 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" iface="eth0" netns="" Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.165 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.165 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.220 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" HandleID="k8s-pod-network.c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.220 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.220 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.228 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" HandleID="k8s-pod-network.c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.228 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" HandleID="k8s-pod-network.c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.231 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:17.240338 containerd[1819]: 2025-08-13 07:17:17.235 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:17.241812 containerd[1819]: time="2025-08-13T07:17:17.240389182Z" level=info msg="TearDown network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\" successfully" Aug 13 07:17:17.241812 containerd[1819]: time="2025-08-13T07:17:17.240420283Z" level=info msg="StopPodSandbox for \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\" returns successfully" Aug 13 07:17:17.242733 containerd[1819]: time="2025-08-13T07:17:17.242692812Z" level=info msg="RemovePodSandbox for \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\"" Aug 13 07:17:17.242834 containerd[1819]: time="2025-08-13T07:17:17.242739913Z" level=info msg="Forcibly stopping sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\"" Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.314 [WARNING][6039] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2d672f67-c714-47a4-acde-08f33595be3a", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"9d98ed1120e535f3bd9c037bef06441c4837d75efced9c073c28b45bf6e06985", Pod:"goldmane-58fd7646b9-ppgrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6186c38648b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.314 [INFO][6039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.314 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" iface="eth0" netns="" Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.314 [INFO][6039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.315 [INFO][6039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.350 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" HandleID="k8s-pod-network.c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.350 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.350 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.360 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" HandleID="k8s-pod-network.c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.360 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" HandleID="k8s-pod-network.c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Workload="ci--4081.3.5--a--0c3b310332-k8s-goldmane--58fd7646b9--ppgrt-eth0" Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.362 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:17.367300 containerd[1819]: 2025-08-13 07:17:17.364 [INFO][6039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b" Aug 13 07:17:17.369888 containerd[1819]: time="2025-08-13T07:17:17.368909956Z" level=info msg="TearDown network for sandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\" successfully" Aug 13 07:17:17.602179 containerd[1819]: time="2025-08-13T07:17:17.601978791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:17.602179 containerd[1819]: time="2025-08-13T07:17:17.602097992Z" level=info msg="RemovePodSandbox \"c5e454238e11390df38c9fd669a430b6cec848c2e7332a41395f1cc7d571e18b\" returns successfully" Aug 13 07:17:17.603303 containerd[1819]: time="2025-08-13T07:17:17.603257007Z" level=info msg="StopPodSandbox for \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\"" Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.642 [WARNING][6063] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"86478e9f-1c1b-4a14-b1bf-da6e5b798f3d", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0", Pod:"coredns-7c65d6cfc9-rqkdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali979770affa5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.642 [INFO][6063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.642 [INFO][6063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" iface="eth0" netns="" Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.642 [INFO][6063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.642 [INFO][6063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.666 [INFO][6070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" HandleID="k8s-pod-network.9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.667 [INFO][6070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.667 [INFO][6070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.673 [WARNING][6070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" HandleID="k8s-pod-network.9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.674 [INFO][6070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" HandleID="k8s-pod-network.9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.680 [INFO][6070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:17.682993 containerd[1819]: 2025-08-13 07:17:17.681 [INFO][6063] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:17.682993 containerd[1819]: time="2025-08-13T07:17:17.682950545Z" level=info msg="TearDown network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\" successfully" Aug 13 07:17:17.682993 containerd[1819]: time="2025-08-13T07:17:17.682982245Z" level=info msg="StopPodSandbox for \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\" returns successfully" Aug 13 07:17:17.684169 containerd[1819]: time="2025-08-13T07:17:17.683598553Z" level=info msg="RemovePodSandbox for \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\"" Aug 13 07:17:17.684169 containerd[1819]: time="2025-08-13T07:17:17.683632754Z" level=info msg="Forcibly stopping sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\"" Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.719 [WARNING][6084] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"86478e9f-1c1b-4a14-b1bf-da6e5b798f3d", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"8024f811291a137e23be0552f41d8c919288411a2ba38dad4dda8300ec37d3a0", Pod:"coredns-7c65d6cfc9-rqkdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali979770affa5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 
07:17:17.719 [INFO][6084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.719 [INFO][6084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" iface="eth0" netns="" Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.719 [INFO][6084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.719 [INFO][6084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.739 [INFO][6091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" HandleID="k8s-pod-network.9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.740 [INFO][6091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.740 [INFO][6091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.746 [WARNING][6091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" HandleID="k8s-pod-network.9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.746 [INFO][6091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" HandleID="k8s-pod-network.9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--rqkdd-eth0" Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.748 [INFO][6091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:17.751384 containerd[1819]: 2025-08-13 07:17:17.749 [INFO][6084] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905" Aug 13 07:17:17.752053 containerd[1819]: time="2025-08-13T07:17:17.751438137Z" level=info msg="TearDown network for sandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\" successfully" Aug 13 07:17:18.212585 containerd[1819]: time="2025-08-13T07:17:18.212521441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:18.215177 containerd[1819]: time="2025-08-13T07:17:18.212618042Z" level=info msg="RemovePodSandbox \"9f1655a546d66ef7bff5ff783e0e73512cee4b3eccc849c928b8686ea6b28905\" returns successfully" Aug 13 07:17:18.215177 containerd[1819]: time="2025-08-13T07:17:18.213327052Z" level=info msg="StopPodSandbox for \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\"" Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.275 [WARNING][6124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8e73f852-6c7b-406b-995e-adedee6784ab", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394", Pod:"coredns-7c65d6cfc9-b4tm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie722b9a935f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.275 [INFO][6124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.275 [INFO][6124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" iface="eth0" netns="" Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.275 [INFO][6124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.275 [INFO][6124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.308 [INFO][6132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" HandleID="k8s-pod-network.7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.308 [INFO][6132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.308 [INFO][6132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.323 [WARNING][6132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" HandleID="k8s-pod-network.7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.324 [INFO][6132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" HandleID="k8s-pod-network.7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.325 [INFO][6132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:18.328387 containerd[1819]: 2025-08-13 07:17:18.327 [INFO][6124] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:18.329756 containerd[1819]: time="2025-08-13T07:17:18.328431750Z" level=info msg="TearDown network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\" successfully" Aug 13 07:17:18.329756 containerd[1819]: time="2025-08-13T07:17:18.328462951Z" level=info msg="StopPodSandbox for \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\" returns successfully" Aug 13 07:17:18.329756 containerd[1819]: time="2025-08-13T07:17:18.329084659Z" level=info msg="RemovePodSandbox for \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\"" Aug 13 07:17:18.329756 containerd[1819]: time="2025-08-13T07:17:18.329118059Z" level=info msg="Forcibly stopping sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\"" Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.380 [WARNING][6147] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8e73f852-6c7b-406b-995e-adedee6784ab", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"d8ffd732c02116c29a08192b9812722a574fb779c8c19823dee5233e10566394", Pod:"coredns-7c65d6cfc9-b4tm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie722b9a935f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 
07:17:18.380 [INFO][6147] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.380 [INFO][6147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" iface="eth0" netns="" Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.380 [INFO][6147] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.380 [INFO][6147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.424 [INFO][6158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" HandleID="k8s-pod-network.7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.424 [INFO][6158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.424 [INFO][6158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.433 [WARNING][6158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" HandleID="k8s-pod-network.7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.433 [INFO][6158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" HandleID="k8s-pod-network.7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Workload="ci--4081.3.5--a--0c3b310332-k8s-coredns--7c65d6cfc9--b4tm5-eth0" Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.436 [INFO][6158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:18.442252 containerd[1819]: 2025-08-13 07:17:18.439 [INFO][6147] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128" Aug 13 07:17:18.443754 containerd[1819]: time="2025-08-13T07:17:18.442302533Z" level=info msg="TearDown network for sandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\" successfully" Aug 13 07:17:18.994840 systemd[1]: run-containerd-runc-k8s.io-ae3e7d7a4ba55129a01ed3cd1f8090a1820e9608d41cdf1ee2c1aeeff6d5b99e-runc.9XMNgg.mount: Deactivated successfully. Aug 13 07:17:21.447480 containerd[1819]: time="2025-08-13T07:17:21.447422040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:21.448045 containerd[1819]: time="2025-08-13T07:17:21.447516441Z" level=info msg="RemovePodSandbox \"7c84d8ae3fa63d04b3fbdf6c800953f159488fec01a345340d7b5d0545778128\" returns successfully" Aug 13 07:17:21.449192 containerd[1819]: time="2025-08-13T07:17:21.448087849Z" level=info msg="StopPodSandbox for \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\"" Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.489 [WARNING][6255] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.489 [INFO][6255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.489 [INFO][6255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" iface="eth0" netns="" Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.489 [INFO][6255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.489 [INFO][6255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.515 [INFO][6262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" HandleID="k8s-pod-network.fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.515 [INFO][6262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.515 [INFO][6262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.522 [WARNING][6262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" HandleID="k8s-pod-network.fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.522 [INFO][6262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" HandleID="k8s-pod-network.fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.524 [INFO][6262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:21.526385 containerd[1819]: 2025-08-13 07:17:21.525 [INFO][6255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:17:21.527102 containerd[1819]: time="2025-08-13T07:17:21.526985372Z" level=info msg="TearDown network for sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\" successfully" Aug 13 07:17:21.527244 containerd[1819]: time="2025-08-13T07:17:21.527197174Z" level=info msg="StopPodSandbox for \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\" returns successfully" Aug 13 07:17:21.527848 containerd[1819]: time="2025-08-13T07:17:21.527813482Z" level=info msg="RemovePodSandbox for \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\"" Aug 13 07:17:21.527957 containerd[1819]: time="2025-08-13T07:17:21.527852683Z" level=info msg="Forcibly stopping sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\"" Aug 13 07:17:21.537071 containerd[1819]: time="2025-08-13T07:17:21.536999201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:17:21.537718 
containerd[1819]: time="2025-08-13T07:17:21.537679210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:21.540070 containerd[1819]: time="2025-08-13T07:17:21.540035041Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:21.544175 containerd[1819]: time="2025-08-13T07:17:21.543099981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:21.544309 containerd[1819]: time="2025-08-13T07:17:21.544228095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.395915534s" Aug 13 07:17:21.544309 containerd[1819]: time="2025-08-13T07:17:21.544276496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:17:21.545993 containerd[1819]: time="2025-08-13T07:17:21.545958618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:17:21.549099 containerd[1819]: time="2025-08-13T07:17:21.549046758Z" level=info msg="CreateContainer within sandbox \"2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:17:21.592497 containerd[1819]: time="2025-08-13T07:17:21.592437820Z" level=info msg="CreateContainer within sandbox 
\"2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b7217cd42891ce85cd3e0d983c13fb14b80f852654c511e59c35f52816ce0b0\"" Aug 13 07:17:21.594726 containerd[1819]: time="2025-08-13T07:17:21.593706537Z" level=info msg="StartContainer for \"7b7217cd42891ce85cd3e0d983c13fb14b80f852654c511e59c35f52816ce0b0\"" Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.602 [WARNING][6280] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" WorkloadEndpoint="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.602 [INFO][6280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.602 [INFO][6280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" iface="eth0" netns="" Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.602 [INFO][6280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.602 [INFO][6280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.643 [INFO][6292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" HandleID="k8s-pod-network.fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.644 [INFO][6292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.644 [INFO][6292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.653 [WARNING][6292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" HandleID="k8s-pod-network.fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.653 [INFO][6292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" HandleID="k8s-pod-network.fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Workload="ci--4081.3.5--a--0c3b310332-k8s-whisker--788d4f8bd5--2tmfw-eth0" Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.656 [INFO][6292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:21.661264 containerd[1819]: 2025-08-13 07:17:21.657 [INFO][6280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95" Aug 13 07:17:21.661264 containerd[1819]: time="2025-08-13T07:17:21.660365001Z" level=info msg="TearDown network for sandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\" successfully" Aug 13 07:17:21.670001 containerd[1819]: time="2025-08-13T07:17:21.669946925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:21.670264 containerd[1819]: time="2025-08-13T07:17:21.670241129Z" level=info msg="RemovePodSandbox \"fd710f00a8b32c71fdb4e357a9385813e8920ac0c6f7b34e10eaefb745fabb95\" returns successfully" Aug 13 07:17:21.671031 containerd[1819]: time="2025-08-13T07:17:21.670996139Z" level=info msg="StopPodSandbox for \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\"" Aug 13 07:17:21.706230 containerd[1819]: time="2025-08-13T07:17:21.703919066Z" level=info msg="StartContainer for \"7b7217cd42891ce85cd3e0d983c13fb14b80f852654c511e59c35f52816ce0b0\" returns successfully" Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.734 [WARNING][6324] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0", GenerateName:"calico-kube-controllers-69594ffd6c-", Namespace:"calico-system", SelfLink:"", UID:"0df5d788-89a4-43a2-8ef9-a2eb77539527", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69594ffd6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", 
ContainerID:"72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5", Pod:"calico-kube-controllers-69594ffd6c-rdm87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali30001e72ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.734 [INFO][6324] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.734 [INFO][6324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" iface="eth0" netns="" Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.734 [INFO][6324] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.734 [INFO][6324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.773 [INFO][6340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" HandleID="k8s-pod-network.d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.774 [INFO][6340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.774 [INFO][6340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.785 [WARNING][6340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" HandleID="k8s-pod-network.d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.785 [INFO][6340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" HandleID="k8s-pod-network.d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.788 [INFO][6340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:21.792161 containerd[1819]: 2025-08-13 07:17:21.790 [INFO][6324] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:21.794929 containerd[1819]: time="2025-08-13T07:17:21.792214211Z" level=info msg="TearDown network for sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\" successfully" Aug 13 07:17:21.794929 containerd[1819]: time="2025-08-13T07:17:21.792248411Z" level=info msg="StopPodSandbox for \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\" returns successfully" Aug 13 07:17:21.794929 containerd[1819]: time="2025-08-13T07:17:21.792935220Z" level=info msg="RemovePodSandbox for \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\"" Aug 13 07:17:21.794929 containerd[1819]: time="2025-08-13T07:17:21.792969921Z" level=info msg="Forcibly stopping sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\"" Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.852 [WARNING][6359] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0", GenerateName:"calico-kube-controllers-69594ffd6c-", Namespace:"calico-system", SelfLink:"", UID:"0df5d788-89a4-43a2-8ef9-a2eb77539527", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69594ffd6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"72fbe77b6ee99d83930b1ac93d1b0d54130c7c3109fb709a2d643eec865cb7f5", Pod:"calico-kube-controllers-69594ffd6c-rdm87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali30001e72ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.852 [INFO][6359] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.853 [INFO][6359] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" iface="eth0" netns="" Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.853 [INFO][6359] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.853 [INFO][6359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.890 [INFO][6367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" HandleID="k8s-pod-network.d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.891 [INFO][6367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.891 [INFO][6367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.899 [WARNING][6367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" HandleID="k8s-pod-network.d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.899 [INFO][6367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" HandleID="k8s-pod-network.d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--kube--controllers--69594ffd6c--rdm87-eth0" Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.901 [INFO][6367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:21.905952 containerd[1819]: 2025-08-13 07:17:21.903 [INFO][6359] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f" Aug 13 07:17:21.906698 containerd[1819]: time="2025-08-13T07:17:21.906012587Z" level=info msg="TearDown network for sandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\" successfully" Aug 13 07:17:21.923614 containerd[1819]: time="2025-08-13T07:17:21.923557314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:21.923784 containerd[1819]: time="2025-08-13T07:17:21.923653315Z" level=info msg="RemovePodSandbox \"d45cdb5288d6e7aa98a43df96f9c370c73f4f367e121662fa0f3d604937d979f\" returns successfully" Aug 13 07:17:21.924655 containerd[1819]: time="2025-08-13T07:17:21.924289924Z" level=info msg="StopPodSandbox for \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\"" Aug 13 07:17:21.935570 containerd[1819]: time="2025-08-13T07:17:21.934689659Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:21.940704 containerd[1819]: time="2025-08-13T07:17:21.940639936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:17:21.944853 containerd[1819]: time="2025-08-13T07:17:21.944797490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 398.793871ms" Aug 13 07:17:21.945084 containerd[1819]: time="2025-08-13T07:17:21.945063993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:17:21.947755 containerd[1819]: time="2025-08-13T07:17:21.947486424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:17:21.948858 containerd[1819]: time="2025-08-13T07:17:21.948695540Z" level=info msg="CreateContainer within sandbox \"e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:17:21.993773 containerd[1819]: 
time="2025-08-13T07:17:21.993621823Z" level=info msg="CreateContainer within sandbox \"e5bd26bebbacb304307f134ef70d6faaad9ffceec934c8c373f2db42d28066cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0046ffc558b5485585b4566e3ade6b94280890a89659d5fde3dd3b42af7a059e\"" Aug 13 07:17:21.997478 containerd[1819]: time="2025-08-13T07:17:21.997431272Z" level=info msg="StartContainer for \"0046ffc558b5485585b4566e3ade6b94280890a89659d5fde3dd3b42af7a059e\"" Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.004 [WARNING][6381] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0", GenerateName:"calico-apiserver-54888cfc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"892537e6-36f9-4f24-929e-348869a4ec0b", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54888cfc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250", Pod:"calico-apiserver-54888cfc75-zm9lk", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc0d6085b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.005 [INFO][6381] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.006 [INFO][6381] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" iface="eth0" netns="" Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.006 [INFO][6381] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.006 [INFO][6381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.086 [INFO][6392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" HandleID="k8s-pod-network.91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0" Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.089 [INFO][6392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.089 [INFO][6392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.097 [WARNING][6392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" HandleID="k8s-pod-network.91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0"
Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.097 [INFO][6392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" HandleID="k8s-pod-network.91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0"
Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.099 [INFO][6392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:17:22.114173 containerd[1819]: 2025-08-13 07:17:22.102 [INFO][6381] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72"
Aug 13 07:17:22.114173 containerd[1819]: time="2025-08-13T07:17:22.112557865Z" level=info msg="TearDown network for sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\" successfully"
Aug 13 07:17:22.114173 containerd[1819]: time="2025-08-13T07:17:22.112589265Z" level=info msg="StopPodSandbox for \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\" returns successfully"
Aug 13 07:17:22.116759 containerd[1819]: time="2025-08-13T07:17:22.115417402Z" level=info msg="RemovePodSandbox for \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\""
Aug 13 07:17:22.116759 containerd[1819]: time="2025-08-13T07:17:22.116248413Z" level=info msg="Forcibly stopping sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\""
Aug 13 07:17:22.233247 containerd[1819]: time="2025-08-13T07:17:22.232620522Z" level=info msg="StartContainer for \"0046ffc558b5485585b4566e3ade6b94280890a89659d5fde3dd3b42af7a059e\" returns successfully"
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.243 [WARNING][6431] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0", GenerateName:"calico-apiserver-54888cfc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"892537e6-36f9-4f24-929e-348869a4ec0b", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54888cfc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-0c3b310332", ContainerID:"2f8f8bcbc72b81d4b0dbe78d8c90f7a36c2dd83e172fc742cb955801877a6250", Pod:"calico-apiserver-54888cfc75-zm9lk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc0d6085b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.245 [INFO][6431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72"
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.245 [INFO][6431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" iface="eth0" netns=""
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.245 [INFO][6431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72"
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.245 [INFO][6431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72"
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.327 [INFO][6448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" HandleID="k8s-pod-network.91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0"
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.327 [INFO][6448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.327 [INFO][6448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.335 [WARNING][6448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" HandleID="k8s-pod-network.91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0"
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.335 [INFO][6448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" HandleID="k8s-pod-network.91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72" Workload="ci--4081.3.5--a--0c3b310332-k8s-calico--apiserver--54888cfc75--zm9lk-eth0"
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.338 [INFO][6448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:17:22.345593 containerd[1819]: 2025-08-13 07:17:22.343 [INFO][6431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72"
Aug 13 07:17:22.345593 containerd[1819]: time="2025-08-13T07:17:22.344978479Z" level=info msg="TearDown network for sandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\" successfully"
Aug 13 07:17:22.357614 containerd[1819]: time="2025-08-13T07:17:22.357560142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 07:17:22.357759 containerd[1819]: time="2025-08-13T07:17:22.357658543Z" level=info msg="RemovePodSandbox \"91d38b6321cad9b43356becd7b4d7235bd5d7c5b7574743c49fa7793b357ae72\" returns successfully"
Aug 13 07:17:23.216176 kubelet[3358]: I0813 07:17:23.215675 3358 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:17:23.233601 kubelet[3358]: I0813 07:17:23.233511 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54888cfc75-zm9lk" podStartSLOduration=35.367667814 podStartE2EDuration="50.2334813s" podCreationTimestamp="2025-08-13 07:16:33 +0000 UTC" firstStartedPulling="2025-08-13 07:17:06.679581624 +0000 UTC m=+50.262085318" lastFinishedPulling="2025-08-13 07:17:21.54539511 +0000 UTC m=+65.127898804" observedRunningTime="2025-08-13 07:17:22.224067911 +0000 UTC m=+65.806571605" watchObservedRunningTime="2025-08-13 07:17:23.2334813 +0000 UTC m=+66.815984994"
Aug 13 07:17:23.529091 kubelet[3358]: I0813 07:17:23.528526 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54888cfc75-2xbg6" podStartSLOduration=36.105312678 podStartE2EDuration="50.528500026s" podCreationTimestamp="2025-08-13 07:16:33 +0000 UTC" firstStartedPulling="2025-08-13 07:17:07.522822857 +0000 UTC m=+51.105326551" lastFinishedPulling="2025-08-13 07:17:21.946010105 +0000 UTC m=+65.528513899" observedRunningTime="2025-08-13 07:17:23.237390651 +0000 UTC m=+66.819894345" watchObservedRunningTime="2025-08-13 07:17:23.528500026 +0000 UTC m=+67.111003720"
Aug 13 07:17:23.681277 containerd[1819]: time="2025-08-13T07:17:23.681211806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:23.684548 containerd[1819]: time="2025-08-13T07:17:23.684460649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Aug 13 07:17:23.689496 containerd[1819]: time="2025-08-13T07:17:23.688976507Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:23.696216 containerd[1819]: time="2025-08-13T07:17:23.696136800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:23.698351 containerd[1819]: time="2025-08-13T07:17:23.698309028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.750772203s"
Aug 13 07:17:23.698531 containerd[1819]: time="2025-08-13T07:17:23.698509331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Aug 13 07:17:23.702617 containerd[1819]: time="2025-08-13T07:17:23.701759973Z" level=info msg="CreateContainer within sandbox \"e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 07:17:23.740137 containerd[1819]: time="2025-08-13T07:17:23.740079970Z" level=info msg="CreateContainer within sandbox \"e20f90f68bf42e48f37e1bbedbcf790999b0aeaf1cc91ef87739031a6de1dd21\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"af65217949da9061dd0aa70cc1d35cc44c3c291a81ed1b6310c07ee41e84ed4d\""
Aug 13 07:17:23.740788 containerd[1819]: time="2025-08-13T07:17:23.740752878Z" level=info msg="StartContainer for \"af65217949da9061dd0aa70cc1d35cc44c3c291a81ed1b6310c07ee41e84ed4d\""
Aug 13 07:17:23.824817 containerd[1819]: time="2025-08-13T07:17:23.824652066Z" level=info msg="StartContainer for \"af65217949da9061dd0aa70cc1d35cc44c3c291a81ed1b6310c07ee41e84ed4d\" returns successfully"
Aug 13 07:17:23.876276 kubelet[3358]: I0813 07:17:23.876187 3358 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 07:17:23.876276 kubelet[3358]: I0813 07:17:23.876283 3358 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 07:17:24.240278 kubelet[3358]: I0813 07:17:24.238030 3358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-spflk" podStartSLOduration=28.227789634 podStartE2EDuration="46.238005327s" podCreationTimestamp="2025-08-13 07:16:38 +0000 UTC" firstStartedPulling="2025-08-13 07:17:05.68927515 +0000 UTC m=+49.271778844" lastFinishedPulling="2025-08-13 07:17:23.699490843 +0000 UTC m=+67.281994537" observedRunningTime="2025-08-13 07:17:24.237771724 +0000 UTC m=+67.820275518" watchObservedRunningTime="2025-08-13 07:17:24.238005327 +0000 UTC m=+67.820509021"
Aug 13 07:17:28.688173 kubelet[3358]: I0813 07:17:28.687131 3358 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:18:21.287532 systemd[1]: run-containerd-runc-k8s.io-ece5d3865bfa40bfa75cbb52b47bb3d54ae101a27abc631bb2fb08620f7b0747-runc.1lT2Cr.mount: Deactivated successfully.
Aug 13 07:18:28.003975 systemd[1]: run-containerd-runc-k8s.io-cdbf8a2ded046abc57255537f35ab585fbceff24fd5915bf4ffdc33f7d2b7f78-runc.joU3qb.mount: Deactivated successfully.
Aug 13 07:18:58.005803 systemd[1]: run-containerd-runc-k8s.io-cdbf8a2ded046abc57255537f35ab585fbceff24fd5915bf4ffdc33f7d2b7f78-runc.9fxuFG.mount: Deactivated successfully.
Aug 13 07:19:10.016507 systemd[1]: Started sshd@7-10.200.4.34:22-10.200.16.10:38646.service - OpenSSH per-connection server daemon (10.200.16.10:38646).
Aug 13 07:19:10.617174 sshd[6824]: Accepted publickey for core from 10.200.16.10 port 38646 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:10.617645 sshd[6824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:10.624798 systemd-logind[1793]: New session 10 of user core.
Aug 13 07:19:10.629996 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 07:19:11.243440 sshd[6824]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:11.250460 systemd[1]: sshd@7-10.200.4.34:22-10.200.16.10:38646.service: Deactivated successfully.
Aug 13 07:19:11.255733 systemd-logind[1793]: Session 10 logged out. Waiting for processes to exit.
Aug 13 07:19:11.256695 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 07:19:11.258360 systemd-logind[1793]: Removed session 10.
Aug 13 07:19:16.346747 systemd[1]: Started sshd@8-10.200.4.34:22-10.200.16.10:53488.service - OpenSSH per-connection server daemon (10.200.16.10:53488).
Aug 13 07:19:16.938376 sshd[6839]: Accepted publickey for core from 10.200.16.10 port 53488 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:16.939917 sshd[6839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:16.944281 systemd-logind[1793]: New session 11 of user core.
Aug 13 07:19:16.951623 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 07:19:17.431477 sshd[6839]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:17.434627 systemd[1]: sshd@8-10.200.4.34:22-10.200.16.10:53488.service: Deactivated successfully.
Aug 13 07:19:17.439760 systemd-logind[1793]: Session 11 logged out. Waiting for processes to exit.
Aug 13 07:19:17.440684 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 07:19:17.443360 systemd-logind[1793]: Removed session 11.
Aug 13 07:19:22.534505 systemd[1]: Started sshd@9-10.200.4.34:22-10.200.16.10:43984.service - OpenSSH per-connection server daemon (10.200.16.10:43984).
Aug 13 07:19:23.117398 sshd[6916]: Accepted publickey for core from 10.200.16.10 port 43984 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:23.119638 sshd[6916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:23.124031 systemd-logind[1793]: New session 12 of user core.
Aug 13 07:19:23.129538 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 07:19:23.614104 sshd[6916]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:23.618395 systemd[1]: sshd@9-10.200.4.34:22-10.200.16.10:43984.service: Deactivated successfully.
Aug 13 07:19:23.623965 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 07:19:23.624892 systemd-logind[1793]: Session 12 logged out. Waiting for processes to exit.
Aug 13 07:19:23.625877 systemd-logind[1793]: Removed session 12.
Aug 13 07:19:23.717731 systemd[1]: Started sshd@10-10.200.4.34:22-10.200.16.10:43990.service - OpenSSH per-connection server daemon (10.200.16.10:43990).
Aug 13 07:19:24.316794 sshd[6933]: Accepted publickey for core from 10.200.16.10 port 43990 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:24.318772 sshd[6933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:24.323594 systemd-logind[1793]: New session 13 of user core.
Aug 13 07:19:24.327512 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 07:19:24.841714 sshd[6933]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:24.847387 systemd[1]: sshd@10-10.200.4.34:22-10.200.16.10:43990.service: Deactivated successfully.
Aug 13 07:19:24.851819 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 07:19:24.853117 systemd-logind[1793]: Session 13 logged out. Waiting for processes to exit.
Aug 13 07:19:24.854300 systemd-logind[1793]: Removed session 13.
Aug 13 07:19:24.944494 systemd[1]: Started sshd@11-10.200.4.34:22-10.200.16.10:44002.service - OpenSSH per-connection server daemon (10.200.16.10:44002).
Aug 13 07:19:25.531537 sshd[6945]: Accepted publickey for core from 10.200.16.10 port 44002 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:25.533129 sshd[6945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:25.538015 systemd-logind[1793]: New session 14 of user core.
Aug 13 07:19:25.545519 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 07:19:26.028027 sshd[6945]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:26.031854 systemd[1]: sshd@11-10.200.4.34:22-10.200.16.10:44002.service: Deactivated successfully.
Aug 13 07:19:26.037522 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 07:19:26.038327 systemd-logind[1793]: Session 14 logged out. Waiting for processes to exit.
Aug 13 07:19:26.040394 systemd-logind[1793]: Removed session 14.
Aug 13 07:19:31.131493 systemd[1]: Started sshd@12-10.200.4.34:22-10.200.16.10:34594.service - OpenSSH per-connection server daemon (10.200.16.10:34594).
Aug 13 07:19:31.761643 sshd[6985]: Accepted publickey for core from 10.200.16.10 port 34594 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:31.763579 sshd[6985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:31.768683 systemd-logind[1793]: New session 15 of user core.
Aug 13 07:19:31.772061 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:19:32.252064 sshd[6985]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:32.255915 systemd[1]: sshd@12-10.200.4.34:22-10.200.16.10:34594.service: Deactivated successfully.
Aug 13 07:19:32.262654 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:19:32.263869 systemd-logind[1793]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:19:32.264980 systemd-logind[1793]: Removed session 15.
Aug 13 07:19:37.355606 systemd[1]: Started sshd@13-10.200.4.34:22-10.200.16.10:34602.service - OpenSSH per-connection server daemon (10.200.16.10:34602).
Aug 13 07:19:37.945768 sshd[6999]: Accepted publickey for core from 10.200.16.10 port 34602 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:37.947622 sshd[6999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:37.952855 systemd-logind[1793]: New session 16 of user core.
Aug 13 07:19:37.958572 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 07:19:38.433454 sshd[6999]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:38.438804 systemd[1]: sshd@13-10.200.4.34:22-10.200.16.10:34602.service: Deactivated successfully.
Aug 13 07:19:38.443915 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:19:38.446036 systemd-logind[1793]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:19:38.447301 systemd-logind[1793]: Removed session 16.
Aug 13 07:19:43.544887 systemd[1]: Started sshd@14-10.200.4.34:22-10.200.16.10:33408.service - OpenSSH per-connection server daemon (10.200.16.10:33408).
Aug 13 07:19:44.172136 sshd[7038]: Accepted publickey for core from 10.200.16.10 port 33408 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:44.174170 sshd[7038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:44.179271 systemd-logind[1793]: New session 17 of user core.
Aug 13 07:19:44.183543 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:19:44.662446 sshd[7038]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:44.666363 systemd[1]: sshd@14-10.200.4.34:22-10.200.16.10:33408.service: Deactivated successfully.
Aug 13 07:19:44.672325 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:19:44.673220 systemd-logind[1793]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:19:44.675086 systemd-logind[1793]: Removed session 17.
Aug 13 07:19:49.766920 systemd[1]: Started sshd@15-10.200.4.34:22-10.200.16.10:33410.service - OpenSSH per-connection server daemon (10.200.16.10:33410).
Aug 13 07:19:50.359210 sshd[7054]: Accepted publickey for core from 10.200.16.10 port 33410 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:50.367127 sshd[7054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:50.385991 systemd-logind[1793]: New session 18 of user core.
Aug 13 07:19:50.392212 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:19:50.915797 sshd[7054]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:50.926930 systemd[1]: sshd@15-10.200.4.34:22-10.200.16.10:33410.service: Deactivated successfully.
Aug 13 07:19:50.929946 systemd-logind[1793]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:19:50.938598 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:19:50.945288 systemd-logind[1793]: Removed session 18.
Aug 13 07:19:51.023571 systemd[1]: Started sshd@16-10.200.4.34:22-10.200.16.10:54344.service - OpenSSH per-connection server daemon (10.200.16.10:54344).
Aug 13 07:19:51.326839 systemd[1]: run-containerd-runc-k8s.io-ece5d3865bfa40bfa75cbb52b47bb3d54ae101a27abc631bb2fb08620f7b0747-runc.FW8bV6.mount: Deactivated successfully.
Aug 13 07:19:51.619206 sshd[7069]: Accepted publickey for core from 10.200.16.10 port 54344 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:51.621622 sshd[7069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:51.630382 systemd-logind[1793]: New session 19 of user core.
Aug 13 07:19:51.636593 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 07:19:52.195229 sshd[7069]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:52.201764 systemd[1]: sshd@16-10.200.4.34:22-10.200.16.10:54344.service: Deactivated successfully.
Aug 13 07:19:52.213424 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:19:52.224239 systemd-logind[1793]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:19:52.226562 systemd-logind[1793]: Removed session 19.
Aug 13 07:19:52.306468 systemd[1]: Started sshd@17-10.200.4.34:22-10.200.16.10:54352.service - OpenSSH per-connection server daemon (10.200.16.10:54352).
Aug 13 07:19:52.937047 sshd[7120]: Accepted publickey for core from 10.200.16.10 port 54352 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:52.937621 sshd[7120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:52.945059 systemd-logind[1793]: New session 20 of user core.
Aug 13 07:19:52.949977 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:19:56.135415 sshd[7120]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:56.141384 systemd-logind[1793]: Session 20 logged out. Waiting for processes to exit.
Aug 13 07:19:56.148634 systemd[1]: sshd@17-10.200.4.34:22-10.200.16.10:54352.service: Deactivated successfully.
Aug 13 07:19:56.156310 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 07:19:56.158072 systemd-logind[1793]: Removed session 20.
Aug 13 07:19:56.241877 systemd[1]: Started sshd@18-10.200.4.34:22-10.200.16.10:54368.service - OpenSSH per-connection server daemon (10.200.16.10:54368).
Aug 13 07:19:56.840673 sshd[7141]: Accepted publickey for core from 10.200.16.10 port 54368 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:56.841613 sshd[7141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:56.853681 systemd-logind[1793]: New session 21 of user core.
Aug 13 07:19:56.857703 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 07:19:57.454062 sshd[7141]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:57.457622 systemd[1]: sshd@18-10.200.4.34:22-10.200.16.10:54368.service: Deactivated successfully.
Aug 13 07:19:57.464195 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 07:19:57.465473 systemd-logind[1793]: Session 21 logged out. Waiting for processes to exit.
Aug 13 07:19:57.466521 systemd-logind[1793]: Removed session 21.
Aug 13 07:19:57.557593 systemd[1]: Started sshd@19-10.200.4.34:22-10.200.16.10:54376.service - OpenSSH per-connection server daemon (10.200.16.10:54376).
Aug 13 07:19:58.140995 sshd[7153]: Accepted publickey for core from 10.200.16.10 port 54376 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:58.142595 sshd[7153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:58.147655 systemd-logind[1793]: New session 22 of user core.
Aug 13 07:19:58.152588 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 07:19:58.625499 sshd[7153]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:58.631268 systemd[1]: sshd@19-10.200.4.34:22-10.200.16.10:54376.service: Deactivated successfully.
Aug 13 07:19:58.635873 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 07:19:58.636867 systemd-logind[1793]: Session 22 logged out. Waiting for processes to exit.
Aug 13 07:19:58.638013 systemd-logind[1793]: Removed session 22.
Aug 13 07:20:03.728502 systemd[1]: Started sshd@20-10.200.4.34:22-10.200.16.10:39986.service - OpenSSH per-connection server daemon (10.200.16.10:39986).
Aug 13 07:20:04.312927 sshd[7190]: Accepted publickey for core from 10.200.16.10 port 39986 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:20:04.314576 sshd[7190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:04.323443 systemd-logind[1793]: New session 23 of user core.
Aug 13 07:20:04.326559 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 07:20:04.798201 sshd[7190]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:04.802394 systemd[1]: sshd@20-10.200.4.34:22-10.200.16.10:39986.service: Deactivated successfully.
Aug 13 07:20:04.809824 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 07:20:04.811056 systemd-logind[1793]: Session 23 logged out. Waiting for processes to exit.
Aug 13 07:20:04.812098 systemd-logind[1793]: Removed session 23.
Aug 13 07:20:09.902520 systemd[1]: Started sshd@21-10.200.4.34:22-10.200.16.10:39996.service - OpenSSH per-connection server daemon (10.200.16.10:39996).
Aug 13 07:20:10.495966 sshd[7210]: Accepted publickey for core from 10.200.16.10 port 39996 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:20:10.497827 sshd[7210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:10.503490 systemd-logind[1793]: New session 24 of user core.
Aug 13 07:20:10.507495 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 07:20:10.985607 sshd[7210]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:10.990085 systemd[1]: sshd@21-10.200.4.34:22-10.200.16.10:39996.service: Deactivated successfully.
Aug 13 07:20:10.995577 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 07:20:10.996480 systemd-logind[1793]: Session 24 logged out. Waiting for processes to exit.
Aug 13 07:20:10.997541 systemd-logind[1793]: Removed session 24.
Aug 13 07:20:16.092165 systemd[1]: Started sshd@22-10.200.4.34:22-10.200.16.10:34690.service - OpenSSH per-connection server daemon (10.200.16.10:34690).
Aug 13 07:20:16.682025 sshd[7244]: Accepted publickey for core from 10.200.16.10 port 34690 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:20:16.683617 sshd[7244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:16.688260 systemd-logind[1793]: New session 25 of user core.
Aug 13 07:20:16.693515 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 07:20:17.163739 sshd[7244]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:17.167619 systemd[1]: sshd@22-10.200.4.34:22-10.200.16.10:34690.service: Deactivated successfully.
Aug 13 07:20:17.174283 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 07:20:17.176483 systemd-logind[1793]: Session 25 logged out. Waiting for processes to exit.
Aug 13 07:20:17.177618 systemd-logind[1793]: Removed session 25.
Aug 13 07:20:21.295728 systemd[1]: run-containerd-runc-k8s.io-ece5d3865bfa40bfa75cbb52b47bb3d54ae101a27abc631bb2fb08620f7b0747-runc.ukmI8D.mount: Deactivated successfully.
Aug 13 07:20:22.268787 systemd[1]: Started sshd@23-10.200.4.34:22-10.200.16.10:44018.service - OpenSSH per-connection server daemon (10.200.16.10:44018).
Aug 13 07:20:22.852472 sshd[7324]: Accepted publickey for core from 10.200.16.10 port 44018 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:20:22.854177 sshd[7324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:22.859086 systemd-logind[1793]: New session 26 of user core.
Aug 13 07:20:22.866484 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 07:20:23.344010 sshd[7324]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:23.347737 systemd[1]: sshd@23-10.200.4.34:22-10.200.16.10:44018.service: Deactivated successfully.
Aug 13 07:20:23.353520 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 07:20:23.354927 systemd-logind[1793]: Session 26 logged out. Waiting for processes to exit.
Aug 13 07:20:23.355946 systemd-logind[1793]: Removed session 26.
Aug 13 07:20:28.446489 systemd[1]: Started sshd@24-10.200.4.34:22-10.200.16.10:44020.service - OpenSSH per-connection server daemon (10.200.16.10:44020).
Aug 13 07:20:29.030355 sshd[7361]: Accepted publickey for core from 10.200.16.10 port 44020 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:20:29.032337 sshd[7361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:29.037647 systemd-logind[1793]: New session 27 of user core.
Aug 13 07:20:29.043534 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 07:20:29.522696 sshd[7361]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:29.527970 systemd[1]: sshd@24-10.200.4.34:22-10.200.16.10:44020.service: Deactivated successfully.
Aug 13 07:20:29.532867 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 07:20:29.533834 systemd-logind[1793]: Session 27 logged out. Waiting for processes to exit.
Aug 13 07:20:29.534811 systemd-logind[1793]: Removed session 27.
Aug 13 07:20:34.628729 systemd[1]: Started sshd@25-10.200.4.34:22-10.200.16.10:49880.service - OpenSSH per-connection server daemon (10.200.16.10:49880).
Aug 13 07:20:35.216181 sshd[7376]: Accepted publickey for core from 10.200.16.10 port 49880 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:20:35.217755 sshd[7376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:35.222901 systemd-logind[1793]: New session 28 of user core.
Aug 13 07:20:35.225917 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 07:20:35.710579 sshd[7376]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:35.714753 systemd[1]: sshd@25-10.200.4.34:22-10.200.16.10:49880.service: Deactivated successfully.
Aug 13 07:20:35.720179 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 07:20:35.721122 systemd-logind[1793]: Session 28 logged out. Waiting for processes to exit.
Aug 13 07:20:35.722184 systemd-logind[1793]: Removed session 28.