May 17 00:19:59.154456 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:19:59.154517 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:19:59.154534 kernel: BIOS-provided physical RAM map:
May 17 00:19:59.154546 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:19:59.154558 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 17 00:19:59.154570 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 17 00:19:59.154585 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
May 17 00:19:59.154601 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
May 17 00:19:59.154613 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
May 17 00:19:59.154625 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 17 00:19:59.154636 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 17 00:19:59.154646 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 17 00:19:59.154655 kernel: printk: bootconsole [earlyser0] enabled
May 17 00:19:59.154666 kernel: NX (Execute Disable) protection: active
May 17 00:19:59.154682 kernel: APIC: Static calls initialized
May 17 00:19:59.154693 kernel: efi: EFI v2.7 by Microsoft
May 17 00:19:59.154705 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
May 17 00:19:59.154716 kernel: SMBIOS 3.1.0 present.
May 17 00:19:59.154728 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
May 17 00:19:59.154740 kernel: Hypervisor detected: Microsoft Hyper-V
May 17 00:19:59.154752 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
May 17 00:19:59.154763 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
May 17 00:19:59.154774 kernel: Hyper-V: Nested features: 0x1e0101
May 17 00:19:59.154785 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 17 00:19:59.154800 kernel: Hyper-V: Using hypercall for remote TLB flush
May 17 00:19:59.154811 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 17 00:19:59.154823 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 17 00:19:59.154836 kernel: tsc: Marking TSC unstable due to running on Hyper-V
May 17 00:19:59.154848 kernel: tsc: Detected 2593.909 MHz processor
May 17 00:19:59.154860 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:19:59.154872 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:19:59.154884 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
May 17 00:19:59.154897 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 17 00:19:59.154911 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:19:59.154923 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
May 17 00:19:59.154935 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
May 17 00:19:59.154947 kernel: Using GB pages for direct mapping
May 17 00:19:59.154960 kernel: Secure boot disabled
May 17 00:19:59.154971 kernel: ACPI: Early table checksum verification disabled
May 17 00:19:59.154983 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 17 00:19:59.155002 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:19:59.155018 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:19:59.155031 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
May 17 00:19:59.155044 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 17 00:19:59.155057 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:19:59.155070 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:19:59.155083 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:19:59.155099 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:19:59.155113 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:19:59.155126 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:19:59.155139 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:19:59.155153 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 17 00:19:59.155166 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
May 17 00:19:59.155179 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 17 00:19:59.155193 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 17 00:19:59.155212 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 17 00:19:59.155225 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 17 00:19:59.155238 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
May 17 00:19:59.155252 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
May 17 00:19:59.155265 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 17 00:19:59.155279 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
May 17 00:19:59.155292 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:19:59.155306 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:19:59.155319 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 17 00:19:59.155334 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
May 17 00:19:59.155347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
May 17 00:19:59.155360 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 17 00:19:59.155374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 17 00:19:59.155387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 17 00:19:59.155401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 17 00:19:59.155414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 17 00:19:59.155427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 17 00:19:59.155438 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 17 00:19:59.155453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 17 00:19:59.155466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 17 00:19:59.155481 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
May 17 00:19:59.155492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
May 17 00:19:59.156994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
May 17 00:19:59.157006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
May 17 00:19:59.157018 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
May 17 00:19:59.157026 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
May 17 00:19:59.157036 kernel: Zone ranges:
May 17 00:19:59.157051 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:19:59.157060 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:19:59.157067 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:19:59.157078 kernel: Movable zone start for each node
May 17 00:19:59.157086 kernel: Early memory node ranges
May 17 00:19:59.157093 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:19:59.157104 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 17 00:19:59.157112 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 17 00:19:59.157120 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:19:59.157132 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 17 00:19:59.157140 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:19:59.157148 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:19:59.157158 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
May 17 00:19:59.157166 kernel: ACPI: PM-Timer IO Port: 0x408
May 17 00:19:59.157174 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
May 17 00:19:59.157184 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
May 17 00:19:59.157192 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:19:59.157201 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:19:59.157213 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 17 00:19:59.157221 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:19:59.157229 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 17 00:19:59.157239 kernel: Booting paravirtualized kernel on Hyper-V
May 17 00:19:59.157248 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:19:59.157255 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 17 00:19:59.157263 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 17 00:19:59.157274 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 17 00:19:59.157281 kernel: pcpu-alloc: [0] 0 1
May 17 00:19:59.157291 kernel: Hyper-V: PV spinlocks enabled
May 17 00:19:59.157302 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:19:59.157311 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:19:59.157319 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:19:59.157328 kernel: random: crng init done
May 17 00:19:59.157337 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 17 00:19:59.157345 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:19:59.157353 kernel: Fallback order for Node 0: 0
May 17 00:19:59.157366 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
May 17 00:19:59.157383 kernel: Policy zone: Normal
May 17 00:19:59.157395 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:19:59.157405 kernel: software IO TLB: area num 2.
May 17 00:19:59.157415 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 310124K reserved, 0K cma-reserved)
May 17 00:19:59.157423 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:19:59.157434 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:19:59.157442 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:19:59.157454 kernel: Dynamic Preempt: voluntary
May 17 00:19:59.157462 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:19:59.157474 kernel: rcu: RCU event tracing is enabled.
May 17 00:19:59.157488 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:19:59.157505 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:19:59.157514 kernel: Rude variant of Tasks RCU enabled.
May 17 00:19:59.157524 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:19:59.157535 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:19:59.157546 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:19:59.157557 kernel: Using NULL legacy PIC
May 17 00:19:59.157566 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 17 00:19:59.157575 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:19:59.157585 kernel: Console: colour dummy device 80x25
May 17 00:19:59.157593 kernel: printk: console [tty1] enabled
May 17 00:19:59.157601 kernel: printk: console [ttyS0] enabled
May 17 00:19:59.157612 kernel: printk: bootconsole [earlyser0] disabled
May 17 00:19:59.157620 kernel: ACPI: Core revision 20230628
May 17 00:19:59.157628 kernel: Failed to register legacy timer interrupt
May 17 00:19:59.157641 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:19:59.157650 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 17 00:19:59.157658 kernel: Hyper-V: Using IPI hypercalls
May 17 00:19:59.157667 kernel: APIC: send_IPI() replaced with hv_send_ipi()
May 17 00:19:59.157677 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
May 17 00:19:59.157685 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
May 17 00:19:59.157694 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
May 17 00:19:59.157704 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
May 17 00:19:59.157713 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
May 17 00:19:59.157723 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593909)
May 17 00:19:59.157735 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:19:59.157743 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:19:59.157751 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:19:59.157759 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:19:59.157769 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:19:59.157777 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 17 00:19:59.157785 kernel: RETBleed: Vulnerable
May 17 00:19:59.157796 kernel: Speculative Store Bypass: Vulnerable
May 17 00:19:59.157806 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:19:59.157814 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:19:59.157825 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:19:59.157833 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:19:59.157842 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:19:59.157852 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 17 00:19:59.157860 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 17 00:19:59.157871 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 17 00:19:59.157879 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:19:59.157888 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 17 00:19:59.157898 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 17 00:19:59.157909 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 17 00:19:59.157919 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
May 17 00:19:59.157927 kernel: Freeing SMP alternatives memory: 32K
May 17 00:19:59.157935 kernel: pid_max: default: 32768 minimum: 301
May 17 00:19:59.157946 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:19:59.157955 kernel: landlock: Up and running.
May 17 00:19:59.157966 kernel: SELinux: Initializing.
May 17 00:19:59.157976 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:19:59.157985 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:19:59.157996 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 17 00:19:59.158005 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:19:59.158018 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:19:59.158028 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:19:59.158036 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 17 00:19:59.158046 kernel: signal: max sigframe size: 3632
May 17 00:19:59.158055 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:19:59.158063 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:19:59.158075 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:19:59.158083 kernel: smp: Bringing up secondary CPUs ...
May 17 00:19:59.158091 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:19:59.158104 kernel: .... node #0, CPUs: #1
May 17 00:19:59.158113 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
May 17 00:19:59.158124 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:19:59.158133 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:19:59.158141 kernel: smpboot: Max logical packages: 1
May 17 00:19:59.158150 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
May 17 00:19:59.158160 kernel: devtmpfs: initialized
May 17 00:19:59.158168 kernel: x86/mm: Memory block size: 128MB
May 17 00:19:59.158178 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 17 00:19:59.158189 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:19:59.158198 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:19:59.158209 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:19:59.158217 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:19:59.158226 kernel: audit: initializing netlink subsys (disabled)
May 17 00:19:59.158237 kernel: audit: type=2000 audit(1747441198.028:1): state=initialized audit_enabled=0 res=1
May 17 00:19:59.158245 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:19:59.158255 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:19:59.158264 kernel: cpuidle: using governor menu
May 17 00:19:59.158275 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:19:59.158286 kernel: dca service started, version 1.12.1
May 17 00:19:59.158294 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
May 17 00:19:59.158304 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:19:59.158314 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:19:59.158327 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:19:59.158336 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:19:59.158350 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:19:59.158367 kernel: ACPI: Added _OSI(Module Device)
May 17 00:19:59.158382 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:19:59.158396 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:19:59.158411 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:19:59.158426 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:19:59.158441 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:19:59.158457 kernel: ACPI: Interpreter enabled
May 17 00:19:59.158470 kernel: ACPI: PM: (supports S0 S5)
May 17 00:19:59.158486 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:19:59.158510 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:19:59.158528 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 17 00:19:59.158543 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 17 00:19:59.158559 kernel: iommu: Default domain type: Translated
May 17 00:19:59.158574 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:19:59.158590 kernel: efivars: Registered efivars operations
May 17 00:19:59.158604 kernel: PCI: Using ACPI for IRQ routing
May 17 00:19:59.158618 kernel: PCI: System does not support PCI
May 17 00:19:59.158633 kernel: vgaarb: loaded
May 17 00:19:59.158648 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
May 17 00:19:59.158667 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:19:59.158683 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:19:59.158698 kernel: pnp: PnP ACPI init
May 17 00:19:59.158713 kernel: pnp: PnP ACPI: found 3 devices
May 17 00:19:59.158729 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:19:59.158744 kernel: NET: Registered PF_INET protocol family
May 17 00:19:59.158760 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:19:59.158775 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 17 00:19:59.158790 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:19:59.158809 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:19:59.158824 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 17 00:19:59.158839 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 17 00:19:59.158854 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:19:59.158870 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:19:59.158885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:19:59.158900 kernel: NET: Registered PF_XDP protocol family
May 17 00:19:59.158916 kernel: PCI: CLS 0 bytes, default 64
May 17 00:19:59.158934 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:19:59.158949 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
May 17 00:19:59.158965 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:19:59.158979 kernel: Initialise system trusted keyrings
May 17 00:19:59.158994 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 17 00:19:59.159009 kernel: Key type asymmetric registered
May 17 00:19:59.159024 kernel: Asymmetric key parser 'x509' registered
May 17 00:19:59.159039 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:19:59.159054 kernel: io scheduler mq-deadline registered
May 17 00:19:59.159073 kernel: io scheduler kyber registered
May 17 00:19:59.159088 kernel: io scheduler bfq registered
May 17 00:19:59.159104 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:19:59.159119 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:19:59.159135 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:19:59.159150 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 17 00:19:59.159165 kernel: i8042: PNP: No PS/2 controller found.
May 17 00:19:59.159390 kernel: rtc_cmos 00:02: registered as rtc0
May 17 00:19:59.159549 kernel: rtc_cmos 00:02: setting system clock to 2025-05-17T00:19:58 UTC (1747441198)
May 17 00:19:59.159670 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 17 00:19:59.159689 kernel: intel_pstate: CPU model not supported
May 17 00:19:59.159705 kernel: efifb: probing for efifb
May 17 00:19:59.159720 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 17 00:19:59.159736 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 17 00:19:59.159752 kernel: efifb: scrolling: redraw
May 17 00:19:59.159767 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:19:59.159782 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:19:59.159802 kernel: fb0: EFI VGA frame buffer device
May 17 00:19:59.159817 kernel: pstore: Using crash dump compression: deflate
May 17 00:19:59.159833 kernel: pstore: Registered efi_pstore as persistent store backend
May 17 00:19:59.159848 kernel: NET: Registered PF_INET6 protocol family
May 17 00:19:59.159863 kernel: Segment Routing with IPv6
May 17 00:19:59.159879 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:19:59.159894 kernel: NET: Registered PF_PACKET protocol family
May 17 00:19:59.159910 kernel: Key type dns_resolver registered
May 17 00:19:59.159925 kernel: IPI shorthand broadcast: enabled
May 17 00:19:59.159944 kernel: sched_clock: Marking stable (1077003200, 62452100)->(1399638800, -260183500)
May 17 00:19:59.159959 kernel: registered taskstats version 1
May 17 00:19:59.159974 kernel: Loading compiled-in X.509 certificates
May 17 00:19:59.159990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:19:59.160005 kernel: Key type .fscrypt registered
May 17 00:19:59.160019 kernel: Key type fscrypt-provisioning registered
May 17 00:19:59.160035 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:19:59.160050 kernel: ima: Allocated hash algorithm: sha1
May 17 00:19:59.160065 kernel: ima: No architecture policies found
May 17 00:19:59.160084 kernel: clk: Disabling unused clocks
May 17 00:19:59.160100 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:19:59.160115 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:19:59.160131 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:19:59.160146 kernel: Run /init as init process
May 17 00:19:59.160161 kernel: with arguments:
May 17 00:19:59.160176 kernel: /init
May 17 00:19:59.160191 kernel: with environment:
May 17 00:19:59.160206 kernel: HOME=/
May 17 00:19:59.160224 kernel: TERM=linux
May 17 00:19:59.160239 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:19:59.160257 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:19:59.160276 systemd[1]: Detected virtualization microsoft.
May 17 00:19:59.160293 systemd[1]: Detected architecture x86-64.
May 17 00:19:59.160308 systemd[1]: Running in initrd.
May 17 00:19:59.160324 systemd[1]: No hostname configured, using default hostname.
May 17 00:19:59.160340 systemd[1]: Hostname set to .
May 17 00:19:59.160360 systemd[1]: Initializing machine ID from random generator.
May 17 00:19:59.160376 systemd[1]: Queued start job for default target initrd.target.
May 17 00:19:59.160392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:19:59.160408 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:19:59.160425 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:19:59.160441 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:19:59.160457 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:19:59.160473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:19:59.160490 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:19:59.164635 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:19:59.164664 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:19:59.164681 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:19:59.164696 systemd[1]: Reached target paths.target - Path Units.
May 17 00:19:59.164712 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:19:59.164735 systemd[1]: Reached target swap.target - Swaps.
May 17 00:19:59.164751 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:19:59.164766 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:19:59.164781 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:19:59.164797 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:19:59.164812 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:19:59.164829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:19:59.164843 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:19:59.164859 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:19:59.164878 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:19:59.164894 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:19:59.164911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:19:59.164927 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:19:59.164941 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:19:59.164954 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:19:59.164966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:19:59.165019 systemd-journald[176]: Collecting audit messages is disabled.
May 17 00:19:59.165047 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:19:59.165056 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:19:59.165068 systemd-journald[176]: Journal started
May 17 00:19:59.165102 systemd-journald[176]: Runtime Journal (/run/log/journal/2878b21214b94c0ab1f4ad61cb9b34be) is 8.0M, max 158.8M, 150.8M free.
May 17 00:19:59.171021 systemd-modules-load[177]: Inserted module 'overlay'
May 17 00:19:59.178679 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:19:59.184038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:19:59.191945 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:19:59.216643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:19:59.219415 systemd-modules-load[177]: Inserted module 'br_netfilter'
May 17 00:19:59.222128 kernel: Bridge firewalling registered
May 17 00:19:59.222379 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:19:59.230740 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:19:59.234524 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:19:59.241894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:19:59.248935 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:19:59.255579 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:19:59.269716 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:19:59.280765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:19:59.287859 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:19:59.296457 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:19:59.310867 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:19:59.318891 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:19:59.326420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:19:59.338800 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:19:59.347670 dracut-cmdline[209]: dracut-dracut-053
May 17 00:19:59.352207 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:19:59.408469 systemd-resolved[214]: Positive Trust Anchors:
May 17 00:19:59.411482 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:19:59.416536 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:19:59.440102 systemd-resolved[214]: Defaulting to hostname 'linux'.
May 17 00:19:59.444412 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:19:59.447772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:19:59.460521 kernel: SCSI subsystem initialized
May 17 00:19:59.471525 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:19:59.482525 kernel: iscsi: registered transport (tcp)
May 17 00:19:59.504855 kernel: iscsi: registered transport (qla4xxx)
May 17 00:19:59.504955 kernel: QLogic iSCSI HBA Driver
May 17 00:19:59.542744 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:19:59.553755 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:19:59.584373 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:19:59.584494 kernel: device-mapper: uevent: version 1.0.3
May 17 00:19:59.588434 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:19:59.636535 kernel: raid6: avx512x4 gen() 17775 MB/s
May 17 00:19:59.655514 kernel: raid6: avx512x2 gen() 17903 MB/s
May 17 00:19:59.674513 kernel: raid6: avx512x1 gen() 17771 MB/s
May 17 00:19:59.693520 kernel: raid6: avx2x4 gen() 17768 MB/s
May 17 00:19:59.712513 kernel: raid6: avx2x2 gen() 17755 MB/s
May 17 00:19:59.732849 kernel: raid6: avx2x1 gen() 13429 MB/s
May 17 00:19:59.732897 kernel: raid6: using algorithm avx512x2 gen() 17903 MB/s
May 17 00:19:59.756545 kernel: raid6: .... xor() 28062 MB/s, rmw enabled
May 17 00:19:59.756689 kernel: raid6: using avx512x2 recovery algorithm
May 17 00:19:59.782089 kernel: xor: automatically using best checksumming function avx
May 17 00:19:59.934531 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:19:59.945041 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:19:59.957721 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:19:59.970725 systemd-udevd[397]: Using default interface naming scheme 'v255'.
May 17 00:19:59.975347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:19:59.990763 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:20:00.004776 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
May 17 00:20:00.037120 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:20:00.050731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:20:00.095785 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:20:00.115496 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:20:00.145885 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:20:00.155388 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:20:00.163175 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:20:00.169937 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:20:00.181747 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:20:00.208522 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:20:00.212136 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:20:00.238567 kernel: hv_vmbus: Vmbus version:5.2
May 17 00:20:00.257548 kernel: hv_vmbus: registering driver hyperv_keyboard
May 17 00:20:00.263517 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 17 00:20:00.263587 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:20:00.268469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:20:00.268670 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:00.283535 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:20:00.283616 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:20:00.286284 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:00.295934 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:20:00.296341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:00.308705 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:00.320271 kernel: AES CTR mode by8 optimization enabled
May 17 00:20:00.320348 kernel: hv_vmbus: registering driver hv_storvsc
May 17 00:20:00.325739 kernel: scsi host0: storvsc_host_t
May 17 00:20:00.325989 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 00:20:00.326012 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 17 00:20:00.329920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:00.338792 kernel: scsi host1: storvsc_host_t
May 17 00:20:00.339025 kernel: hv_vmbus: registering driver hv_netvsc
May 17 00:20:00.348316 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 17 00:20:00.360273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:20:00.362330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:00.375519 kernel: hv_vmbus: registering driver hid_hyperv
May 17 00:20:00.375578 kernel: PTP clock support registered
May 17 00:20:00.383844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:00.396716 kernel: hv_utils: Registering HyperV Utility Driver
May 17 00:20:00.396788 kernel: hv_vmbus: registering driver hv_utils
May 17 00:20:00.399079 kernel: hv_utils: Heartbeat IC version 3.0
May 17 00:20:00.399148 kernel: hv_utils: Shutdown IC version 3.2
May 17 00:20:00.399513 kernel: hv_utils: TimeSync IC version 4.0
May 17 00:20:01.263034 systemd-resolved[214]: Clock change detected. Flushing caches.
May 17 00:20:01.299668 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 17 00:20:01.299695 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 17 00:20:01.299881 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 17 00:20:01.300027 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:20:01.300047 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 17 00:20:01.317714 kernel: hv_netvsc 7ced8d2d-de81-7ced-8d2d-de817ced8d2d eth0: VF slot 1 added
May 17 00:20:01.339219 kernel: hv_vmbus: registering driver hv_pci
May 17 00:20:01.344254 kernel: hv_pci 941af122-800d-4636-ba07-afe5909703f0: PCI VMBus probing: Using version 0x10004
May 17 00:20:01.347053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:01.352867 kernel: hv_pci 941af122-800d-4636-ba07-afe5909703f0: PCI host bridge to bus 800d:00
May 17 00:20:01.365571 kernel: pci_bus 800d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
May 17 00:20:01.365994 kernel: pci_bus 800d:00: No busn resource found for root bus, will use [bus 00-ff]
May 17 00:20:01.366538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:01.380115 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 17 00:20:01.380500 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 17 00:20:01.383281 kernel: pci 800d:00:02.0: [15b3:1016] type 00 class 0x020000
May 17 00:20:01.394763 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 17 00:20:01.395042 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 17 00:20:01.395151 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 17 00:20:01.400257 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:01.404220 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 17 00:20:01.513236 kernel: pci 800d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 17 00:20:01.518235 kernel: pci 800d:00:02.0: enabling Extended Tags
May 17 00:20:01.534283 kernel: pci 800d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 800d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:20:01.542029 kernel: pci_bus 800d:00: busn_res: [bus 00-ff] end is updated to 00
May 17 00:20:01.542452 kernel: pci 800d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 17 00:20:01.569421 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:01.631251 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447)
May 17 00:20:01.645790 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 17 00:20:01.713537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 17 00:20:01.766257 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (467)
May 17 00:20:01.789717 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 17 00:20:01.815549 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 17 00:20:01.875237 kernel: mlx5_core 800d:00:02.0: enabling device (0000 -> 0002)
May 17 00:20:01.875763 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:20:01.894216 kernel: mlx5_core 800d:00:02.0: firmware version: 14.30.5000
May 17 00:20:01.904257 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:01.915214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:02.134487 kernel: hv_netvsc 7ced8d2d-de81-7ced-8d2d-de817ced8d2d eth0: VF registering: eth1
May 17 00:20:02.136180 kernel: mlx5_core 800d:00:02.0 eth1: joined to eth0
May 17 00:20:02.151215 kernel: mlx5_core 800d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:20:02.171223 kernel: mlx5_core 800d:00:02.0 enP32781s1: renamed from eth1
May 17 00:20:02.239380 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 17 00:20:02.917134 disk-uuid[597]: The operation has completed successfully.
May 17 00:20:02.919957 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:02.996831 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:20:02.996957 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:20:03.027411 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:20:03.034847 sh[690]: Success
May 17 00:20:03.055246 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:20:03.272992 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:20:03.287360 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:20:03.293685 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:20:03.314455 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:20:03.315417 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:03.321021 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:20:03.324326 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:20:03.327202 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:20:03.867833 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:20:03.872049 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:20:03.881512 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:20:03.887366 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:20:03.908216 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:03.914454 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:03.914563 kernel: BTRFS info (device sda6): using free space tree
May 17 00:20:03.961224 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:20:03.971798 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:20:03.978835 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:03.989013 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:20:03.997221 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:20:04.005526 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:20:04.014403 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:20:04.040578 systemd-networkd[874]: lo: Link UP
May 17 00:20:04.040592 systemd-networkd[874]: lo: Gained carrier
May 17 00:20:04.042994 systemd-networkd[874]: Enumeration completed
May 17 00:20:04.043113 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:20:04.046665 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:20:04.046669 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:20:04.047839 systemd[1]: Reached target network.target - Network.
May 17 00:20:04.108224 kernel: mlx5_core 800d:00:02.0 enP32781s1: Link up
May 17 00:20:04.158228 kernel: hv_netvsc 7ced8d2d-de81-7ced-8d2d-de817ced8d2d eth0: Data path switched to VF: enP32781s1
May 17 00:20:04.159672 systemd-networkd[874]: enP32781s1: Link UP
May 17 00:20:04.159855 systemd-networkd[874]: eth0: Link UP
May 17 00:20:04.160087 systemd-networkd[874]: eth0: Gained carrier
May 17 00:20:04.160105 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:20:04.167519 systemd-networkd[874]: enP32781s1: Gained carrier
May 17 00:20:04.204286 systemd-networkd[874]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 17 00:20:05.424501 systemd-networkd[874]: enP32781s1: Gained IPv6LL
May 17 00:20:05.488581 systemd-networkd[874]: eth0: Gained IPv6LL
May 17 00:20:05.709624 ignition[873]: Ignition 2.19.0
May 17 00:20:05.709637 ignition[873]: Stage: fetch-offline
May 17 00:20:05.709686 ignition[873]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:05.709699 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:05.709814 ignition[873]: parsed url from cmdline: ""
May 17 00:20:05.709818 ignition[873]: no config URL provided
May 17 00:20:05.709834 ignition[873]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:20:05.709846 ignition[873]: no config at "/usr/lib/ignition/user.ign"
May 17 00:20:05.709854 ignition[873]: failed to fetch config: resource requires networking
May 17 00:20:05.710103 ignition[873]: Ignition finished successfully
May 17 00:20:05.734964 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:20:05.747453 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:20:05.766704 ignition[882]: Ignition 2.19.0
May 17 00:20:05.766717 ignition[882]: Stage: fetch
May 17 00:20:05.766963 ignition[882]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:05.766977 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:05.767100 ignition[882]: parsed url from cmdline: ""
May 17 00:20:05.767105 ignition[882]: no config URL provided
May 17 00:20:05.767111 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:20:05.767119 ignition[882]: no config at "/usr/lib/ignition/user.ign"
May 17 00:20:05.767141 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 17 00:20:05.840905 ignition[882]: GET result: OK
May 17 00:20:05.840997 ignition[882]: config has been read from IMDS userdata
May 17 00:20:05.844768 unknown[882]: fetched base config from "system"
May 17 00:20:05.841028 ignition[882]: parsing config with SHA512: 9c66a48eb3cc26a26e5885c49e491e75c6bd9f473796ad0c7afc430ccbfb03a6e94942eb7e2583ef1f30287255efaf970e6a2b08261c8c505f7e5e4f83bd5670
May 17 00:20:05.844776 unknown[882]: fetched base config from "system"
May 17 00:20:05.845124 ignition[882]: fetch: fetch complete
May 17 00:20:05.844782 unknown[882]: fetched user config from "azure"
May 17 00:20:05.845130 ignition[882]: fetch: fetch passed
May 17 00:20:05.847007 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:20:05.845176 ignition[882]: Ignition finished successfully
May 17 00:20:05.860454 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:20:05.879356 ignition[888]: Ignition 2.19.0
May 17 00:20:05.879367 ignition[888]: Stage: kargs
May 17 00:20:05.879609 ignition[888]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:05.879624 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:05.880537 ignition[888]: kargs: kargs passed
May 17 00:20:05.880590 ignition[888]: Ignition finished successfully
May 17 00:20:05.892850 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:20:05.902500 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:20:05.920725 ignition[894]: Ignition 2.19.0
May 17 00:20:05.920738 ignition[894]: Stage: disks
May 17 00:20:05.920995 ignition[894]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:05.923308 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:20:05.921011 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:05.927683 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:20:05.921964 ignition[894]: disks: disks passed
May 17 00:20:05.932412 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:20:05.922024 ignition[894]: Ignition finished successfully
May 17 00:20:05.935546 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:20:05.940373 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:20:05.943573 systemd[1]: Reached target basic.target - Basic System.
May 17 00:20:05.956396 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:20:06.028179 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 17 00:20:06.034501 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:20:06.051477 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:20:06.148221 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:20:06.149632 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:20:06.153033 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:20:06.193363 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:20:06.198096 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:20:06.209278 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913)
May 17 00:20:06.218860 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:06.218990 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:06.219012 kernel: BTRFS info (device sda6): using free space tree
May 17 00:20:06.215576 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:20:06.224175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:20:06.224795 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:20:06.227348 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:20:06.253555 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:20:06.256603 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:20:06.266834 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:20:07.237043 coreos-metadata[915]: May 17 00:20:07.236 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 17 00:20:07.244313 coreos-metadata[915]: May 17 00:20:07.244 INFO Fetch successful
May 17 00:20:07.247277 coreos-metadata[915]: May 17 00:20:07.245 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 17 00:20:07.262541 coreos-metadata[915]: May 17 00:20:07.262 INFO Fetch successful
May 17 00:20:07.271213 coreos-metadata[915]: May 17 00:20:07.269 INFO wrote hostname ci-4081.3.3-n-4e81e33f0f to /sysroot/etc/hostname
May 17 00:20:07.279028 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:20:07.277837 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:20:07.334033 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory
May 17 00:20:07.366271 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:20:07.395374 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:20:08.299616 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:20:08.315449 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:20:08.323406 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:20:08.355976 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:08.353761 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:20:08.381476 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:20:08.403183 ignition[1035]: INFO : Ignition 2.19.0 May 17 00:20:08.403183 ignition[1035]: INFO : Stage: mount May 17 00:20:08.405878 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:20:08.421548 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:20:08.421548 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:20:08.421548 ignition[1035]: INFO : mount: mount passed May 17 00:20:08.421548 ignition[1035]: INFO : Ignition finished successfully May 17 00:20:08.427399 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:20:08.454430 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:20:08.481221 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1043) May 17 00:20:08.481290 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:20:08.490234 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:20:08.494165 kernel: BTRFS info (device sda6): using free space tree May 17 00:20:08.500234 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:20:08.502646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:20:08.536285 ignition[1060]: INFO : Ignition 2.19.0 May 17 00:20:08.536285 ignition[1060]: INFO : Stage: files May 17 00:20:08.542859 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:20:08.542859 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:20:08.542859 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping May 17 00:20:08.575370 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:20:08.575370 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:20:08.599315 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:20:08.599782 unknown[1060]: wrote ssh authorized keys file for user: core May 17 00:20:08.603558 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:20:08.603558 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:20:08.603558 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 17 00:20:08.603558 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 17 00:20:08.881454 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:20:09.036302 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
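The OEM partition is mounted twice in this log, each time with the BTRFS options shown above (crc32c checksums, free-space tree, async discard). A small sketch for verifying those options on a live system, assuming btrfs-progs is present and using the initrd paths from the log:

    # Show the mount options and filesystem identity the kernel reported above.
    findmnt --target /sysroot/oem
    btrfs filesystem show /dev/sda6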
file "/sysroot/home/core/nginx.yaml" May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 17 00:20:09.920419 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:20:10.239728 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:20:10.239728 ignition[1060]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:20:10.251543 ignition[1060]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:20:10.258269 ignition[1060]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:20:10.258269 ignition[1060]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:20:10.267269 ignition[1060]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 17 00:20:10.267269 ignition[1060]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:20:10.275650 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:20:10.280554 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:20:10.285374 ignition[1060]: INFO : files: files passed May 17 00:20:10.285374 ignition[1060]: INFO : Ignition finished successfully May 17 00:20:10.282553 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:20:10.302576 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
May 17 00:20:10.308979 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:20:10.313309 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:20:10.313425 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:20:10.346924 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:20:10.346924 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:20:10.350853 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:20:10.356699 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:20:10.360296 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:20:10.374278 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:20:10.411568 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:20:10.411697 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:20:10.418807 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:20:10.424813 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:20:10.430087 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:20:10.442368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:20:10.457006 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:20:10.471349 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:20:10.488254 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:20:10.489668 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:20:10.490108 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:20:10.491045 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:20:10.491202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:20:10.491990 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:20:10.492492 systemd[1]: Stopped target basic.target - Basic System. May 17 00:20:10.492963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:20:10.493430 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:20:10.493898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:20:10.494410 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:20:10.494868 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:20:10.495358 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:20:10.495811 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:20:10.496267 systemd[1]: Stopped target swap.target - Swaps. May 17 00:20:10.496750 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:20:10.496872 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:20:10.497689 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
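op(e) above leaves /etc/.ignition-result.json on the final root, and every stage's output lands in the journal under the ignition identifier seen in these lines. Two ways to review the run after boot, sketched here with jq as an extra-tooling assumption:

    jq . /etc/.ignition-result.json   # completion record written by op(e) above
    journalctl -t ignition            # replay all Ignition stage messages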
May 17 00:20:10.498149 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:20:10.499104 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:20:10.537993 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:20:10.549177 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:20:10.549370 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:20:10.555348 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:20:10.555521 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:20:10.561279 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:20:10.561432 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:20:10.566378 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:20:10.566526 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:20:10.589569 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:20:10.595710 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:20:10.595963 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:20:10.608301 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:20:10.615627 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:20:10.615880 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:20:10.624077 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:20:10.624467 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:20:10.629080 ignition[1112]: INFO : Ignition 2.19.0 May 17 00:20:10.629080 ignition[1112]: INFO : Stage: umount May 17 00:20:10.629080 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:20:10.629080 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:20:10.629080 ignition[1112]: INFO : umount: umount passed May 17 00:20:10.629080 ignition[1112]: INFO : Ignition finished successfully May 17 00:20:10.633216 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:20:10.633355 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:20:10.639500 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:20:10.644500 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:20:10.652467 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:20:10.652567 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:20:10.657702 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:20:10.660537 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:20:10.669957 systemd[1]: Stopped target network.target - Network. May 17 00:20:10.678509 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:20:10.678586 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:20:10.678924 systemd[1]: Stopped target paths.target - Path Units. May 17 00:20:10.679434 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:20:10.692163 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:20:10.695329 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:20:10.702262 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:20:10.707921 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:20:10.707980 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:20:10.721550 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:20:10.724276 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:20:10.729858 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:20:10.729943 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:20:10.735126 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:20:10.735205 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:20:10.741070 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:20:10.746059 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:20:10.757954 systemd-networkd[874]: eth0: DHCPv6 lease lost May 17 00:20:10.763812 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:20:10.806504 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:20:10.812569 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:20:10.822300 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:20:10.822482 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:20:10.833606 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:20:10.833723 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:20:10.841568 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:20:10.841654 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:20:10.865459 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:20:10.868231 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:20:10.868344 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:20:10.872420 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:20:10.872490 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:20:10.881344 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:20:10.881406 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:20:10.881723 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:20:10.881762 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:20:10.882287 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:20:10.915717 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:20:10.915905 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:20:10.920104 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:20:10.920197 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
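Before systemd-networkd is stopped and the DHCPv6 lease is dropped as above, the lease and address state can be inspected with the standard tooling; a brief sketch for a running system:

    networkctl status eth0     # lease, DNS and carrier state held by systemd-networkd
    ip -6 addr show dev eth0   # addresses that get released at teardown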
May 17 00:20:10.920919 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:20:10.920963 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:20:10.921345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:20:10.921392 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:20:10.922801 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:20:10.922857 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:20:10.923697 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:20:10.923735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:20:10.942572 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:20:10.947340 kernel: hv_netvsc 7ced8d2d-de81-7ced-8d2d-de817ced8d2d eth0: Data path switched from VF: enP32781s1 May 17 00:20:10.949823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:20:10.949898 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:20:10.950769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:20:10.950809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:10.993683 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:20:10.993818 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:20:11.001895 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:20:11.002015 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:20:11.185942 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:20:11.186113 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:20:11.194263 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:20:11.200166 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:20:11.200281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:20:11.213400 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:20:11.224170 systemd[1]: Switching root. May 17 00:20:11.258204 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
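Switching root hands PID 1 over to the real system and terminates the initrd journal; the "Journal stopped" line that follows marks the cutover. After boot, both journals can be read as one stream, e.g.:

    # Precise timestamps make the initrd-to-host handover visible.
    journalctl -b -o short-precise | less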
May 17 00:20:11.258293 systemd-journald[176]: Journal stopped May 17 00:19:59.155212 kernel:
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] May 17 00:19:59.155225 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] May 17 00:19:59.155238 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] May 17 00:19:59.155252 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] May 17 00:19:59.155265 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] May 17 00:19:59.155279 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] May 17 00:19:59.155292 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 17 00:19:59.155306 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 17 00:19:59.155319 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug May 17 00:19:59.155334 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug May 17 00:19:59.155347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug May 17 00:19:59.155360 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug May 17 00:19:59.155374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug May 17 00:19:59.155387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug May 17 00:19:59.155401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug May 17 00:19:59.155414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug May 17 00:19:59.155427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug May 17 00:19:59.155438 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug May 17 00:19:59.155453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug May 17 00:19:59.155466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug May 17 00:19:59.155481 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug May 17 00:19:59.155492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug May 17 00:19:59.156994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug May 17 00:19:59.157006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug May 17 00:19:59.157018 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] May 17 00:19:59.157026 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] May 17 00:19:59.157036 kernel: Zone ranges: May 17 00:19:59.157051 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:19:59.157060 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 17 00:19:59.157067 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] May 17 00:19:59.157078 kernel: Movable zone start for each node May 17 00:19:59.157086 kernel: Early memory node ranges May 17 00:19:59.157093 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 17 00:19:59.157104 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] May 17 00:19:59.157112 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] May 17 00:19:59.157120 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] May 17 00:19:59.157132 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] May 17 00:19:59.157140 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:19:59.157148 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 17 00:19:59.157158 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges May 17 00:19:59.157166 kernel: ACPI: PM-Timer IO Port: 0x408 May 17 00:19:59.157174 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) May 17 00:19:59.157184 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 May 17 00:19:59.157192 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:19:59.157201 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:19:59.157213 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 May 17 00:19:59.157221 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:19:59.157229 kernel: [mem 0x40000000-0xffffffff] available for PCI devices May 17 00:19:59.157239 kernel: Booting paravirtualized kernel on Hyper-V May 17 00:19:59.157248 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:19:59.157255 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 00:19:59.157263 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 00:19:59.157274 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 00:19:59.157281 kernel: pcpu-alloc: [0] 0 1 May 17 00:19:59.157291 kernel: Hyper-V: PV spinlocks enabled May 17 00:19:59.157302 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:19:59.157311 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:19:59.157319 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:19:59.157328 kernel: random: crng init done May 17 00:19:59.157337 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) May 17 00:19:59.157345 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:19:59.157353 kernel: Fallback order for Node 0: 0 May 17 00:19:59.157366 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 May 17 00:19:59.157383 kernel: Policy zone: Normal May 17 00:19:59.157395 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:19:59.157405 kernel: software IO TLB: area num 2. May 17 00:19:59.157415 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 310124K reserved, 0K cma-reserved) May 17 00:19:59.157423 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:19:59.157434 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:19:59.157442 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:19:59.157454 kernel: Dynamic Preempt: voluntary May 17 00:19:59.157462 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:19:59.157474 kernel: rcu: RCU event tracing is enabled. May 17 00:19:59.157488 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:19:59.157505 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:19:59.157514 kernel: Rude variant of Tasks RCU enabled. May 17 00:19:59.157524 kernel: Tracing variant of Tasks RCU enabled. 
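Note that rootflags=rw and mount.usrflags=ro appear twice on the effective command line above. A quick way to confirm the runtime parameters and the single-node memory layout described by the SRAT/zone lines, sketched for the booted guest:

    cat /proc/cmdline                     # effective parameters, duplicates included
    cat /sys/devices/system/node/online   # expect "0" for the one NUMA node above
    head -n 12 /proc/zoneinfo             # DMA/DMA32/Normal zones printed above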
May 17 00:19:59.157535 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:19:59.157546 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:19:59.157557 kernel: Using NULL legacy PIC May 17 00:19:59.157566 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 May 17 00:19:59.157575 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:19:59.157585 kernel: Console: colour dummy device 80x25 May 17 00:19:59.157593 kernel: printk: console [tty1] enabled May 17 00:19:59.157601 kernel: printk: console [ttyS0] enabled May 17 00:19:59.157612 kernel: printk: bootconsole [earlyser0] disabled May 17 00:19:59.157620 kernel: ACPI: Core revision 20230628 May 17 00:19:59.157628 kernel: Failed to register legacy timer interrupt May 17 00:19:59.157641 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:19:59.157650 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 17 00:19:59.157658 kernel: Hyper-V: Using IPI hypercalls May 17 00:19:59.157667 kernel: APIC: send_IPI() replaced with hv_send_ipi() May 17 00:19:59.157677 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() May 17 00:19:59.157685 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() May 17 00:19:59.157694 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() May 17 00:19:59.157704 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() May 17 00:19:59.157713 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() May 17 00:19:59.157723 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593909) May 17 00:19:59.157735 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 00:19:59.157743 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 00:19:59.157751 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:19:59.157759 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:19:59.157769 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:19:59.157777 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
May 17 00:19:59.157785 kernel: RETBleed: Vulnerable May 17 00:19:59.157796 kernel: Speculative Store Bypass: Vulnerable May 17 00:19:59.157806 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:19:59.157814 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:19:59.157825 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:19:59.157833 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:19:59.157842 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:19:59.157852 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 17 00:19:59.157860 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 17 00:19:59.157871 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 17 00:19:59.157879 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:19:59.157888 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 May 17 00:19:59.157898 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 May 17 00:19:59.157909 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 May 17 00:19:59.157919 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. May 17 00:19:59.157927 kernel: Freeing SMP alternatives memory: 32K May 17 00:19:59.157935 kernel: pid_max: default: 32768 minimum: 301 May 17 00:19:59.157946 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:19:59.157955 kernel: landlock: Up and running. May 17 00:19:59.157966 kernel: SELinux: Initializing. May 17 00:19:59.157976 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:19:59.157985 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:19:59.157996 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) May 17 00:19:59.158005 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:19:59.158018 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:19:59.158028 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:19:59.158036 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 17 00:19:59.158046 kernel: signal: max sigframe size: 3632 May 17 00:19:59.158055 kernel: rcu: Hierarchical SRCU implementation. May 17 00:19:59.158063 kernel: rcu: Max phase no-delay instances is 400. May 17 00:19:59.158075 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:19:59.158083 kernel: smp: Bringing up secondary CPUs ... May 17 00:19:59.158091 kernel: smpboot: x86: Booting SMP configuration: May 17 00:19:59.158104 kernel: .... node #0, CPUs: #1 May 17 00:19:59.158113 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. May 17 00:19:59.158124 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
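The Spectre, RETBleed, TAA and MMIO Stale Data lines above are summarized per-vulnerability in sysfs; a one-liner to dump the same state on the running guest:

    grep . /sys/devices/system/cpu/vulnerabilities/*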
May 17 00:19:59.158133 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:19:59.158141 kernel: smpboot: Max logical packages: 1 May 17 00:19:59.158150 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) May 17 00:19:59.158160 kernel: devtmpfs: initialized May 17 00:19:59.158168 kernel: x86/mm: Memory block size: 128MB May 17 00:19:59.158178 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) May 17 00:19:59.158189 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:19:59.158198 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:19:59.158209 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:19:59.158217 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:19:59.158226 kernel: audit: initializing netlink subsys (disabled) May 17 00:19:59.158237 kernel: audit: type=2000 audit(1747441198.028:1): state=initialized audit_enabled=0 res=1 May 17 00:19:59.158245 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:19:59.158255 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:19:59.158264 kernel: cpuidle: using governor menu May 17 00:19:59.158275 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:19:59.158286 kernel: dca service started, version 1.12.1 May 17 00:19:59.158294 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] May 17 00:19:59.158304 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 17 00:19:59.158314 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:19:59.158327 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:19:59.158336 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:19:59.158350 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:19:59.158367 kernel: ACPI: Added _OSI(Module Device) May 17 00:19:59.158382 kernel: ACPI: Added _OSI(Processor Device) May 17 00:19:59.158396 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:19:59.158411 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:19:59.158426 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:19:59.158441 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:19:59.158457 kernel: ACPI: Interpreter enabled May 17 00:19:59.158470 kernel: ACPI: PM: (supports S0 S5) May 17 00:19:59.158486 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:19:59.158510 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:19:59.158528 kernel: PCI: Ignoring E820 reservations for host bridge windows May 17 00:19:59.158543 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F May 17 00:19:59.158559 kernel: iommu: Default domain type: Translated May 17 00:19:59.158574 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:19:59.158590 kernel: efivars: Registered efivars operations May 17 00:19:59.158604 kernel: PCI: Using ACPI for IRQ routing May 17 00:19:59.158618 kernel: PCI: System does not support PCI May 17 00:19:59.158633 kernel: vgaarb: loaded May 17 00:19:59.158648 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page May 17 00:19:59.158667 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:19:59.158683 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:19:59.158698 kernel: 
pnp: PnP ACPI init May 17 00:19:59.158713 kernel: pnp: PnP ACPI: found 3 devices May 17 00:19:59.158729 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:19:59.158744 kernel: NET: Registered PF_INET protocol family May 17 00:19:59.158760 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:19:59.158775 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) May 17 00:19:59.158790 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:19:59.158809 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:19:59.158824 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:19:59.158839 kernel: TCP: Hash tables configured (established 65536 bind 65536) May 17 00:19:59.158854 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) May 17 00:19:59.158870 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) May 17 00:19:59.158885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:19:59.158900 kernel: NET: Registered PF_XDP protocol family May 17 00:19:59.158916 kernel: PCI: CLS 0 bytes, default 64 May 17 00:19:59.158934 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 00:19:59.158949 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) May 17 00:19:59.158965 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:19:59.158979 kernel: Initialise system trusted keyrings May 17 00:19:59.158994 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 May 17 00:19:59.159009 kernel: Key type asymmetric registered May 17 00:19:59.159024 kernel: Asymmetric key parser 'x509' registered May 17 00:19:59.159039 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:19:59.159054 kernel: io scheduler mq-deadline registered May 17 00:19:59.159073 kernel: io scheduler kyber registered May 17 00:19:59.159088 kernel: io scheduler bfq registered May 17 00:19:59.159104 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:19:59.159119 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:19:59.159135 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:19:59.159150 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 17 00:19:59.159165 kernel: i8042: PNP: No PS/2 controller found. 
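Because the TSC is marked unstable under Hyper-V earlier in this log, the kernel settles on hyperv_clocksource_tsc_page. The selection remains visible at runtime:

    cat /sys/devices/system/clocksource/clocksource0/current_clocksource
    cat /sys/devices/system/clocksource/clocksource0/available_clocksource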
May 17 00:19:59.159390 kernel: rtc_cmos 00:02: registered as rtc0 May 17 00:19:59.159549 kernel: rtc_cmos 00:02: setting system clock to 2025-05-17T00:19:58 UTC (1747441198) May 17 00:19:59.159670 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram May 17 00:19:59.159689 kernel: intel_pstate: CPU model not supported May 17 00:19:59.159705 kernel: efifb: probing for efifb May 17 00:19:59.159720 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 17 00:19:59.159736 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 17 00:19:59.159752 kernel: efifb: scrolling: redraw May 17 00:19:59.159767 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:19:59.159782 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:19:59.159802 kernel: fb0: EFI VGA frame buffer device May 17 00:19:59.159817 kernel: pstore: Using crash dump compression: deflate May 17 00:19:59.159833 kernel: pstore: Registered efi_pstore as persistent store backend May 17 00:19:59.159848 kernel: NET: Registered PF_INET6 protocol family May 17 00:19:59.159863 kernel: Segment Routing with IPv6 May 17 00:19:59.159879 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:19:59.159894 kernel: NET: Registered PF_PACKET protocol family May 17 00:19:59.159910 kernel: Key type dns_resolver registered May 17 00:19:59.159925 kernel: IPI shorthand broadcast: enabled May 17 00:19:59.159944 kernel: sched_clock: Marking stable (1077003200, 62452100)->(1399638800, -260183500) May 17 00:19:59.159959 kernel: registered taskstats version 1 May 17 00:19:59.159974 kernel: Loading compiled-in X.509 certificates May 17 00:19:59.159990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:19:59.160005 kernel: Key type .fscrypt registered May 17 00:19:59.160019 kernel: Key type fscrypt-provisioning registered May 17 00:19:59.160035 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:19:59.160050 kernel: ima: Allocated hash algorithm: sha1 May 17 00:19:59.160065 kernel: ima: No architecture policies found May 17 00:19:59.160084 kernel: clk: Disabling unused clocks May 17 00:19:59.160100 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:19:59.160115 kernel: Write protecting the kernel read-only data: 36864k May 17 00:19:59.160131 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:19:59.160146 kernel: Run /init as init process May 17 00:19:59.160161 kernel: with arguments: May 17 00:19:59.160176 kernel: /init May 17 00:19:59.160191 kernel: with environment: May 17 00:19:59.160206 kernel: HOME=/ May 17 00:19:59.160224 kernel: TERM=linux May 17 00:19:59.160239 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:19:59.160257 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:19:59.160276 systemd[1]: Detected virtualization microsoft. May 17 00:19:59.160293 systemd[1]: Detected architecture x86-64. May 17 00:19:59.160308 systemd[1]: Running in initrd. May 17 00:19:59.160324 systemd[1]: No hostname configured, using default hostname. May 17 00:19:59.160340 systemd[1]: Hostname set to . 
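efi_pstore, registered above as the persistent store backend, keeps oops and panic records across reboots; after a crash they would surface as files, e.g.:

    ls -l /sys/fs/pstore/    # empty on a clean boot like this one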
May 17 00:19:59.160360 systemd[1]: Initializing machine ID from random generator. May 17 00:19:59.160376 systemd[1]: Queued start job for default target initrd.target. May 17 00:19:59.160392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:19:59.160408 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:19:59.160425 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:19:59.160441 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:19:59.160457 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:19:59.160473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:19:59.160490 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:19:59.164635 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:19:59.164664 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:19:59.164681 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:19:59.164696 systemd[1]: Reached target paths.target - Path Units. May 17 00:19:59.164712 systemd[1]: Reached target slices.target - Slice Units. May 17 00:19:59.164735 systemd[1]: Reached target swap.target - Swaps. May 17 00:19:59.164751 systemd[1]: Reached target timers.target - Timer Units. May 17 00:19:59.164766 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:19:59.164781 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:19:59.164797 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:19:59.164812 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:19:59.164829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:19:59.164843 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:19:59.164859 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:19:59.164878 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:19:59.164894 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:19:59.164911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:19:59.164927 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:19:59.164941 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:19:59.164954 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:19:59.164966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:19:59.165019 systemd-journald[176]: Collecting audit messages is disabled. May 17 00:19:59.165047 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:19:59.165056 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:19:59.165068 systemd-journald[176]: Journal started May 17 00:19:59.165102 systemd-journald[176]: Runtime Journal (/run/log/journal/2878b21214b94c0ab1f4ad61cb9b34be) is 8.0M, max 158.8M, 150.8M free. 
May 17 00:19:59.171021 systemd-modules-load[177]: Inserted module 'overlay' May 17 00:19:59.178679 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:19:59.184038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:19:59.191945 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:19:59.216643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:19:59.219415 systemd-modules-load[177]: Inserted module 'br_netfilter' May 17 00:19:59.222128 kernel: Bridge firewalling registered May 17 00:19:59.222379 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:19:59.230740 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:19:59.234524 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:19:59.241894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:19:59.248935 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:19:59.255579 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:19:59.269716 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:19:59.280765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:19:59.287859 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:19:59.296457 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:19:59.310867 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:19:59.318891 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:19:59.326420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:19:59.338800 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:19:59.347670 dracut-cmdline[209]: dracut-dracut-053 May 17 00:19:59.352207 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:19:59.408469 systemd-resolved[214]: Positive Trust Anchors: May 17 00:19:59.411482 systemd-resolved[214]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:19:59.416536 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:19:59.440102 systemd-resolved[214]: Defaulting to hostname 'linux'. May 17 00:19:59.444412 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:19:59.447772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:19:59.460521 kernel: SCSI subsystem initialized May 17 00:19:59.471525 kernel: Loading iSCSI transport class v2.0-870. May 17 00:19:59.482525 kernel: iscsi: registered transport (tcp) May 17 00:19:59.504855 kernel: iscsi: registered transport (qla4xxx) May 17 00:19:59.504955 kernel: QLogic iSCSI HBA Driver May 17 00:19:59.542744 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:19:59.553755 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:19:59.584373 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:19:59.584494 kernel: device-mapper: uevent: version 1.0.3 May 17 00:19:59.588434 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:19:59.636535 kernel: raid6: avx512x4 gen() 17775 MB/s May 17 00:19:59.655514 kernel: raid6: avx512x2 gen() 17903 MB/s May 17 00:19:59.674513 kernel: raid6: avx512x1 gen() 17771 MB/s May 17 00:19:59.693520 kernel: raid6: avx2x4 gen() 17768 MB/s May 17 00:19:59.712513 kernel: raid6: avx2x2 gen() 17755 MB/s May 17 00:19:59.732849 kernel: raid6: avx2x1 gen() 13429 MB/s May 17 00:19:59.732897 kernel: raid6: using algorithm avx512x2 gen() 17903 MB/s May 17 00:19:59.756545 kernel: raid6: .... xor() 28062 MB/s, rmw enabled May 17 00:19:59.756689 kernel: raid6: using avx512x2 recovery algorithm May 17 00:19:59.782089 kernel: xor: automatically using best checksumming function avx May 17 00:19:59.934531 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:19:59.945041 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:19:59.957721 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:19:59.970725 systemd-udevd[397]: Using default interface naming scheme 'v255'. May 17 00:19:59.975347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:19:59.990763 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:20:00.004776 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation May 17 00:20:00.037120 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:20:00.050731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:20:00.095785 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:20:00.115496 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
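The raid6 and xor benchmark lines above show the kernel selecting avx512x2 generation and AVX checksumming. A small sketch for re-reading the chosen implementations from the ring buffer, assuming dmesg access:

    dmesg | grep -E 'raid6: using|xor: '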
May 17 00:20:00.145885 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:20:00.155388 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:20:00.163175 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:20:00.169937 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:20:00.181747 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:20:00.208522 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:20:00.212136 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:20:00.238567 kernel: hv_vmbus: Vmbus version:5.2
May 17 00:20:00.257548 kernel: hv_vmbus: registering driver hyperv_keyboard
May 17 00:20:00.263517 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 17 00:20:00.263587 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:20:00.268469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:20:00.268670 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:00.283535 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:20:00.283616 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:20:00.286284 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:00.295934 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:20:00.296341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:00.308705 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:00.320271 kernel: AES CTR mode by8 optimization enabled
May 17 00:20:00.320348 kernel: hv_vmbus: registering driver hv_storvsc
May 17 00:20:00.325739 kernel: scsi host0: storvsc_host_t
May 17 00:20:00.325989 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 00:20:00.326012 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 17 00:20:00.329920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:00.338792 kernel: scsi host1: storvsc_host_t
May 17 00:20:00.339025 kernel: hv_vmbus: registering driver hv_netvsc
May 17 00:20:00.348316 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 17 00:20:00.360273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:20:00.362330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:00.375519 kernel: hv_vmbus: registering driver hid_hyperv
May 17 00:20:00.375578 kernel: PTP clock support registered
May 17 00:20:00.383844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:00.396716 kernel: hv_utils: Registering HyperV Utility Driver
May 17 00:20:00.396788 kernel: hv_vmbus: registering driver hv_utils
May 17 00:20:00.399079 kernel: hv_utils: Heartbeat IC version 3.0
May 17 00:20:00.399148 kernel: hv_utils: Shutdown IC version 3.2
May 17 00:20:00.399513 kernel: hv_utils: TimeSync IC version 4.0
May 17 00:20:01.263034 systemd-resolved[214]: Clock change detected. Flushing caches.
May 17 00:20:01.299668 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 17 00:20:01.299695 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 17 00:20:01.299881 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 17 00:20:01.300027 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:20:01.300047 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 17 00:20:01.317714 kernel: hv_netvsc 7ced8d2d-de81-7ced-8d2d-de817ced8d2d eth0: VF slot 1 added
May 17 00:20:01.339219 kernel: hv_vmbus: registering driver hv_pci
May 17 00:20:01.344254 kernel: hv_pci 941af122-800d-4636-ba07-afe5909703f0: PCI VMBus probing: Using version 0x10004
May 17 00:20:01.347053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:01.352867 kernel: hv_pci 941af122-800d-4636-ba07-afe5909703f0: PCI host bridge to bus 800d:00
May 17 00:20:01.365571 kernel: pci_bus 800d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
May 17 00:20:01.365994 kernel: pci_bus 800d:00: No busn resource found for root bus, will use [bus 00-ff]
May 17 00:20:01.366538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:01.380115 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 17 00:20:01.380500 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 17 00:20:01.383281 kernel: pci 800d:00:02.0: [15b3:1016] type 00 class 0x020000
May 17 00:20:01.394763 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 17 00:20:01.395042 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 17 00:20:01.395151 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 17 00:20:01.400257 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:01.404220 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 17 00:20:01.513236 kernel: pci 800d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 17 00:20:01.518235 kernel: pci 800d:00:02.0: enabling Extended Tags
May 17 00:20:01.534283 kernel: pci 800d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 800d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:20:01.542029 kernel: pci_bus 800d:00: busn_res: [bus 00-ff] end is updated to 00
May 17 00:20:01.542452 kernel: pci 800d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 17 00:20:01.569421 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:01.631251 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447)
May 17 00:20:01.645790 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 17 00:20:01.713537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 17 00:20:01.766257 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (467)
May 17 00:20:01.789717 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 17 00:20:01.815549 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 17 00:20:01.875237 kernel: mlx5_core 800d:00:02.0: enabling device (0000 -> 0002)
May 17 00:20:01.875763 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
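[Editor's note: the sd 0:0:0:0 line above reports the same capacity twice, once in decimal gigabytes and once in binary gibibytes. The arithmetic is easy to verify; a worked example in Python, not log output:]

    # 63737856 logical blocks of 512 bytes each.
    blocks, block_size = 63737856, 512
    size_bytes = blocks * block_size              # 32,633,782,272 bytes
    print(round(size_bytes / 10**9, 1), "GB")     # 32.6 (decimal, 10^9 bytes)
    print(round(size_bytes / 2**30, 1), "GiB")    # 30.4 (binary, 2^30 bytes)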
May 17 00:20:01.894216 kernel: mlx5_core 800d:00:02.0: firmware version: 14.30.5000
May 17 00:20:01.904257 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:01.915214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:02.134487 kernel: hv_netvsc 7ced8d2d-de81-7ced-8d2d-de817ced8d2d eth0: VF registering: eth1
May 17 00:20:02.136180 kernel: mlx5_core 800d:00:02.0 eth1: joined to eth0
May 17 00:20:02.151215 kernel: mlx5_core 800d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:20:02.171223 kernel: mlx5_core 800d:00:02.0 enP32781s1: renamed from eth1
May 17 00:20:02.239380 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 17 00:20:02.917134 disk-uuid[597]: The operation has completed successfully.
May 17 00:20:02.919957 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:02.996831 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:20:02.996957 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:20:03.027411 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:20:03.034847 sh[690]: Success
May 17 00:20:03.055246 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:20:03.272992 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:20:03.287360 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:20:03.293685 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:20:03.314455 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:20:03.315417 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:03.321021 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:20:03.324326 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:20:03.327202 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:20:03.867833 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:20:03.872049 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:20:03.881512 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:20:03.887366 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:20:03.908216 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:03.914454 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:03.914563 kernel: BTRFS info (device sda6): using free space tree
May 17 00:20:03.961224 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:20:03.971798 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:20:03.978835 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:03.989013 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:20:03.997221 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:20:04.005526 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:20:04.014403 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:20:04.040578 systemd-networkd[874]: lo: Link UP
May 17 00:20:04.040592 systemd-networkd[874]: lo: Gained carrier
May 17 00:20:04.042994 systemd-networkd[874]: Enumeration completed
May 17 00:20:04.043113 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:20:04.046665 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:20:04.046669 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:20:04.047839 systemd[1]: Reached target network.target - Network.
May 17 00:20:04.108224 kernel: mlx5_core 800d:00:02.0 enP32781s1: Link up
May 17 00:20:04.158228 kernel: hv_netvsc 7ced8d2d-de81-7ced-8d2d-de817ced8d2d eth0: Data path switched to VF: enP32781s1
May 17 00:20:04.159672 systemd-networkd[874]: enP32781s1: Link UP
May 17 00:20:04.159855 systemd-networkd[874]: eth0: Link UP
May 17 00:20:04.160087 systemd-networkd[874]: eth0: Gained carrier
May 17 00:20:04.160105 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:20:04.167519 systemd-networkd[874]: enP32781s1: Gained carrier
May 17 00:20:04.204286 systemd-networkd[874]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 17 00:20:05.424501 systemd-networkd[874]: enP32781s1: Gained IPv6LL
May 17 00:20:05.488581 systemd-networkd[874]: eth0: Gained IPv6LL
May 17 00:20:05.709624 ignition[873]: Ignition 2.19.0
May 17 00:20:05.709637 ignition[873]: Stage: fetch-offline
May 17 00:20:05.709686 ignition[873]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:05.709699 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:05.709814 ignition[873]: parsed url from cmdline: ""
May 17 00:20:05.709818 ignition[873]: no config URL provided
May 17 00:20:05.709834 ignition[873]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:20:05.709846 ignition[873]: no config at "/usr/lib/ignition/user.ign"
May 17 00:20:05.709854 ignition[873]: failed to fetch config: resource requires networking
May 17 00:20:05.710103 ignition[873]: Ignition finished successfully
May 17 00:20:05.734964 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:20:05.747453 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:20:05.766704 ignition[882]: Ignition 2.19.0
May 17 00:20:05.766717 ignition[882]: Stage: fetch
May 17 00:20:05.766963 ignition[882]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:05.766977 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:05.767100 ignition[882]: parsed url from cmdline: ""
May 17 00:20:05.767105 ignition[882]: no config URL provided
May 17 00:20:05.767111 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:20:05.767119 ignition[882]: no config at "/usr/lib/ignition/user.ign"
May 17 00:20:05.767141 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 17 00:20:05.840905 ignition[882]: GET result: OK
May 17 00:20:05.840997 ignition[882]: config has been read from IMDS userdata
May 17 00:20:05.844768 unknown[882]: fetched base config from "system"
May 17 00:20:05.841028 ignition[882]: parsing config with SHA512: 9c66a48eb3cc26a26e5885c49e491e75c6bd9f473796ad0c7afc430ccbfb03a6e94942eb7e2583ef1f30287255efaf970e6a2b08261c8c505f7e5e4f83bd5670
May 17 00:20:05.844776 unknown[882]: fetched base config from "system"
May 17 00:20:05.845124 ignition[882]: fetch: fetch complete
May 17 00:20:05.844782 unknown[882]: fetched user config from "azure"
May 17 00:20:05.845130 ignition[882]: fetch: fetch passed
May 17 00:20:05.847007 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:20:05.845176 ignition[882]: Ignition finished successfully
May 17 00:20:05.860454 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:20:05.879356 ignition[888]: Ignition 2.19.0
May 17 00:20:05.879367 ignition[888]: Stage: kargs
May 17 00:20:05.879609 ignition[888]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:05.879624 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:05.880537 ignition[888]: kargs: kargs passed
May 17 00:20:05.880590 ignition[888]: Ignition finished successfully
May 17 00:20:05.892850 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:20:05.902500 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:20:05.920725 ignition[894]: Ignition 2.19.0
May 17 00:20:05.920738 ignition[894]: Stage: disks
May 17 00:20:05.920995 ignition[894]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:05.923308 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:20:05.921011 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:05.927683 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:20:05.921964 ignition[894]: disks: disks passed
May 17 00:20:05.932412 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:20:05.922024 ignition[894]: Ignition finished successfully
May 17 00:20:05.935546 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:20:05.940373 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:20:05.943573 systemd[1]: Reached target basic.target - Basic System.
May 17 00:20:05.956396 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:20:06.028179 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 17 00:20:06.034501 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
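[Editor's note: the fetch stage above reads the machine config from the Azure Instance Metadata Service (IMDS) endpoint shown in the GET line. The request can be reproduced from inside an Azure VM; the following is a hedged stdlib-only Python sketch, assuming the documented behavior that IMDS requires a Metadata: true header and returns userData base64-encoded:]

    import base64
    import urllib.request

    # Same endpoint the fetch stage queried above; reachable only from an Azure VM.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data = base64.b64decode(resp.read())  # userData arrives base64-encoded
    print(user_data.decode(errors="replace"))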
May 17 00:20:06.051477 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:20:06.148221 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:20:06.149632 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:20:06.153033 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:20:06.193363 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:20:06.198096 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:20:06.209278 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913)
May 17 00:20:06.218860 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:06.218990 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:06.219012 kernel: BTRFS info (device sda6): using free space tree
May 17 00:20:06.215576 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:20:06.224175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:20:06.224795 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:20:06.227348 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:20:06.253555 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:20:06.256603 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:20:06.266834 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:20:07.237043 coreos-metadata[915]: May 17 00:20:07.236 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 17 00:20:07.244313 coreos-metadata[915]: May 17 00:20:07.244 INFO Fetch successful
May 17 00:20:07.247277 coreos-metadata[915]: May 17 00:20:07.245 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 17 00:20:07.262541 coreos-metadata[915]: May 17 00:20:07.262 INFO Fetch successful
May 17 00:20:07.271213 coreos-metadata[915]: May 17 00:20:07.269 INFO wrote hostname ci-4081.3.3-n-4e81e33f0f to /sysroot/etc/hostname
May 17 00:20:07.279028 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:20:07.277837 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:20:07.334033 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory
May 17 00:20:07.366271 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:20:07.395374 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:20:08.299616 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:20:08.315449 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:20:08.323406 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:20:08.355976 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:08.353761 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:20:08.381476 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:20:08.403183 ignition[1035]: INFO : Ignition 2.19.0
May 17 00:20:08.403183 ignition[1035]: INFO : Stage: mount
May 17 00:20:08.421548 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:20:08.421548 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:08.421548 ignition[1035]: INFO : mount: mount passed
May 17 00:20:08.421548 ignition[1035]: INFO : Ignition finished successfully
May 17 00:20:08.405878 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:20:08.427399 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:20:08.454430 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:20:08.481221 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1043)
May 17 00:20:08.481290 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:08.490234 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:08.494165 kernel: BTRFS info (device sda6): using free space tree
May 17 00:20:08.500234 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:20:08.502646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:20:08.536285 ignition[1060]: INFO : Ignition 2.19.0
May 17 00:20:08.536285 ignition[1060]: INFO : Stage: files
May 17 00:20:08.542859 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:20:08.542859 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:08.542859 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:20:08.575370 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:20:08.575370 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:20:08.599315 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:20:08.603558 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:20:08.603558 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:20:08.603558 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 17 00:20:08.603558 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 17 00:20:08.599782 unknown[1060]: wrote ssh authorized keys file for user: core
May 17 00:20:08.881454 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:20:09.036302 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:20:09.042133 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:20:09.086472 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 17 00:20:09.920419 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:20:10.239728 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:20:10.239728 ignition[1060]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:20:10.251543 ignition[1060]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:20:10.258269 ignition[1060]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:20:10.258269 ignition[1060]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:20:10.267269 ignition[1060]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:20:10.267269 ignition[1060]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:20:10.275650 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:20:10.280554 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:20:10.285374 ignition[1060]: INFO : files: files passed
May 17 00:20:10.285374 ignition[1060]: INFO : Ignition finished successfully
May 17 00:20:10.282553 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:20:10.302576 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:20:10.308979 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:20:10.313309 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:20:10.313425 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:20:10.346924 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:20:10.346924 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:20:10.360296 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:20:10.350853 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:20:10.356699 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:20:10.374278 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:20:10.411568 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:20:10.411697 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:20:10.418807 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:20:10.424813 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:20:10.430087 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:20:10.442368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:20:10.457006 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:20:10.471349 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:20:10.488254 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:20:10.489668 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:20:10.490108 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:20:10.491045 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:20:10.491202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:20:10.491990 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:20:10.492492 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:20:10.492963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:20:10.493430 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:20:10.493898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:20:10.494410 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:20:10.494868 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:20:10.495358 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:20:10.495811 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:20:10.496267 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:20:10.496750 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:20:10.496872 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:20:10.497689 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:20:10.498149 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:20:10.499104 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:20:10.537993 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:20:10.549177 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:20:10.629080 ignition[1112]: INFO : Ignition 2.19.0
May 17 00:20:10.629080 ignition[1112]: INFO : Stage: umount
May 17 00:20:10.629080 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:20:10.629080 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:20:10.629080 ignition[1112]: INFO : umount: umount passed
May 17 00:20:10.629080 ignition[1112]: INFO : Ignition finished successfully
May 17 00:20:10.549370 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:20:10.555348 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:20:10.555521 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:20:10.561279 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:20:10.561432 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:20:10.566378 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:20:10.566526 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:20:10.589569 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:20:10.595710 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:20:10.595963 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:20:10.608301 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:20:10.615627 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:20:10.615880 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:20:10.624077 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:20:10.624467 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:20:10.633216 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:20:10.633355 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:20:10.639500 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:20:10.644500 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:20:10.652467 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:20:10.652567 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:20:10.657702 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:20:10.660537 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 17 00:20:10.669957 systemd[1]: Stopped target network.target - Network.
May 17 00:20:10.678509 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:20:10.678586 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:20:10.678924 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:20:10.679434 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:20:10.692163 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:20:10.695329 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:20:10.702262 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:20:10.707921 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:20:10.707980 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:20:10.721550 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:20:10.724276 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:20:10.729858 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:20:10.729943 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:20:10.735126 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:20:10.735205 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:20:10.741070 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:20:10.746059 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:20:10.757954 systemd-networkd[874]: eth0: DHCPv6 lease lost
May 17 00:20:10.763812 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:20:10.806504 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:20:10.812569 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:20:10.822300 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:20:10.822482 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:20:10.833606 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:20:10.833723 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:20:10.841568 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:20:10.841654 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:20:10.865459 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:20:10.868231 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:20:10.868344 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:20:10.872420 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:20:10.872490 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:20:10.881344 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:20:10.881406 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:20:10.881723 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:20:10.881762 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:20:10.882287 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:20:10.915717 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:20:10.915905 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:20:10.920104 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:20:10.920197 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:20:10.947340 kernel: hv_netvsc 7ced8d2d-de81-7ced-8d2d-de817ced8d2d eth0: Data path switched from VF: enP32781s1
May 17 00:20:10.920919 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:20:10.920963 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:20:10.921345 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:20:10.921392 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:20:10.922801 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:20:10.922857 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:20:10.923697 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:20:10.923735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:10.942572 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:20:10.949823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:20:10.949898 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:20:10.950769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:20:10.950809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:10.993683 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:20:10.993818 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:20:11.001895 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:20:11.002015 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:20:11.185942 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:20:11.186113 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:20:11.194263 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:20:11.200166 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:20:11.200281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:20:11.213400 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:20:11.224170 systemd[1]: Switching root.
May 17 00:20:11.258204 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
May 17 00:20:11.258293 systemd-journald[176]: Journal stopped
May 17 00:20:14.856441 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:20:14.856485 kernel: SELinux: policy capability open_perms=1
May 17 00:20:14.856498 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:20:14.856506 kernel: SELinux: policy capability always_check_network=0
May 17 00:20:14.856517 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:20:14.856525 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:20:14.856537 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:20:14.856549 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:20:14.856559 kernel: audit: type=1403 audit(1747441212.181:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:20:14.856571 systemd[1]: Successfully loaded SELinux policy in 106.393ms.
May 17 00:20:14.856584 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.529ms.
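[Editor's note: the two timing reports above (SELinux policy loaded in 106.393ms, relabel in 12.529ms) follow a consistent "in <duration>ms" pattern, so stage timings can be scraped out of a captured log. An illustrative sketch; the regex and sample lines are the editor's, not systemd's output format specification:]

    import re

    SAMPLE = [
        "systemd[1]: Successfully loaded SELinux policy in 106.393ms.",
        "systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.529ms.",
    ]
    # Matches durations like "in 106.393ms" or "in 392 ms".
    DURATION = re.compile(r"in (\d+(?:\.\d+)?)\s*ms")

    for line in SAMPLE:
        match = DURATION.search(line)
        if match:
            print(f"{float(match.group(1)):8.3f} ms  <- {line}")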
May 17 00:20:14.856596 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:20:14.856606 systemd[1]: Detected virtualization microsoft.
May 17 00:20:14.856618 systemd[1]: Detected architecture x86-64.
May 17 00:20:14.856630 systemd[1]: Detected first boot.
May 17 00:20:14.856641 systemd[1]: Hostname set to <ci-4081.3.3-n-4e81e33f0f>.
May 17 00:20:14.856653 systemd[1]: Initializing machine ID from random generator.
May 17 00:20:14.856663 zram_generator::config[1155]: No configuration found.
May 17 00:20:14.856673 systemd[1]: Populated /etc with preset unit settings.
May 17 00:20:14.856687 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:20:14.856700 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:20:14.856709 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:20:14.856720 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:20:14.856729 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:20:14.856740 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:20:14.856749 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:20:14.856762 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:20:14.856773 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:20:14.856782 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:20:14.856792 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:20:14.856802 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:20:14.856815 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:20:14.856825 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:20:14.856836 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:20:14.856850 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:20:14.856860 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:20:14.856873 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:20:14.856882 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:20:14.856895 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:20:14.856905 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:20:14.856921 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:20:14.856933 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:20:14.856946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:20:14.856959 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:20:14.856973 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:20:14.856984 systemd[1]: Reached target swap.target - Swaps.
May 17 00:20:14.856997 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:20:14.857008 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:20:14.857020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:20:14.857033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:20:14.857048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:20:14.857062 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:20:14.857073 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:20:14.857083 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:20:14.857095 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:20:14.857106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:14.857116 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:20:14.857129 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:20:14.857140 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:20:14.857152 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:20:14.857163 systemd[1]: Reached target machines.target - Containers.
May 17 00:20:14.857174 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:20:14.857198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:20:14.857215 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:20:14.857226 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:20:14.857237 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:20:14.857249 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:20:14.857259 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:20:14.857273 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:20:14.857283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:20:14.857295 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:20:14.857310 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:20:14.857320 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:20:14.857330 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:20:14.857340 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:20:14.857352 kernel: loop: module loaded
May 17 00:20:14.857363 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:20:14.857373 kernel: ACPI: bus type drm_connector registered
May 17 00:20:14.857382 kernel: fuse: init (API version 7.39)
May 17 00:20:14.857394 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:20:14.857408 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:20:14.857420 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:20:14.857462 systemd-journald[1232]: Collecting audit messages is disabled.
May 17 00:20:14.857487 systemd-journald[1232]: Journal started
May 17 00:20:14.857515 systemd-journald[1232]: Runtime Journal (/run/log/journal/4c62fb6557774239aad2efb7e0eb170a) is 8.0M, max 158.8M, 150.8M free.
May 17 00:20:14.084101 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:20:14.235365 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 17 00:20:14.235787 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:20:14.871918 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:20:14.877215 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:20:14.877300 systemd[1]: Stopped verity-setup.service.
May 17 00:20:14.884344 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:20:14.899218 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:20:14.900401 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:20:14.903706 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:20:14.907441 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:20:14.910989 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:20:14.915062 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:20:14.919421 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:20:14.924836 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:20:14.929344 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:20:14.929580 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:20:14.934764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:20:14.934980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:20:14.939981 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:20:14.940384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:20:14.946240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:20:14.946511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:20:14.951000 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:20:14.952243 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:20:14.956917 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:20:14.957144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:20:14.964035 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:20:14.968592 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:20:14.974666 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:20:14.980492 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:20:15.002127 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:20:15.010363 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:20:15.016364 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:20:15.019860 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:20:15.020080 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:20:15.025170 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:20:15.035434 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:20:15.049439 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:20:15.053364 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:20:15.058584 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:20:15.080672 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:20:15.084279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:20:15.085688 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:20:15.089078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:20:15.093615 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:20:15.100476 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:20:15.108388 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:20:15.114877 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:20:15.120025 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:20:15.127846 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:20:15.138968 systemd-journald[1232]: Time spent on flushing to /var/log/journal/4c62fb6557774239aad2efb7e0eb170a is 42.380ms for 957 entries.
May 17 00:20:15.138968 systemd-journald[1232]: System Journal (/var/log/journal/4c62fb6557774239aad2efb7e0eb170a) is 8.0M, max 2.6G, 2.6G free.
May 17 00:20:15.216846 systemd-journald[1232]: Received client request to flush runtime journal.
May 17 00:20:15.216908 kernel: loop0: detected capacity change from 0 to 142488
May 17 00:20:15.144059 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:20:15.162385 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:20:15.170938 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:20:15.184730 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:20:15.193693 udevadm[1292]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 00:20:15.196040 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:20:15.221800 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:20:15.242841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:20:15.287506 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:20:15.289762 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:20:15.330263 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:20:15.341455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:20:15.451041 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
May 17 00:20:15.451665 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
May 17 00:20:15.459370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:20:15.822381 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:20:15.858240 kernel: loop1: detected capacity change from 0 to 140768
May 17 00:20:15.991223 kernel: loop2: detected capacity change from 0 to 31056
May 17 00:20:16.122248 kernel: loop3: detected capacity change from 0 to 229808
May 17 00:20:16.157328 kernel: loop4: detected capacity change from 0 to 142488
May 17 00:20:16.179216 kernel: loop5: detected capacity change from 0 to 140768
May 17 00:20:16.193235 kernel: loop6: detected capacity change from 0 to 31056
May 17 00:20:16.201264 kernel: loop7: detected capacity change from 0 to 229808
May 17 00:20:16.225697 (sd-merge)[1316]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 17 00:20:16.226560 (sd-merge)[1316]: Merged extensions into '/usr'.
May 17 00:20:16.230432 systemd[1]: Reloading requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:20:16.230453 systemd[1]: Reloading...
May 17 00:20:16.305226 zram_generator::config[1341]: No configuration found.
May 17 00:20:16.512375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:20:16.624007 systemd[1]: Reloading finished in 392 ms.
May 17 00:20:16.652603 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:20:16.666501 systemd[1]: Starting ensure-sysext.service...
May 17 00:20:16.671185 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:20:16.925918 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:20:16.926552 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:20:16.927687 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:20:16.928070 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
May 17 00:20:16.928121 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
May 17 00:20:17.298359 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:20:17.298406 systemd-tmpfiles[1401]: Skipping /boot
May 17 00:20:17.303406 systemd[1]: Reloading requested from client PID 1400 ('systemctl') (unit ensure-sysext.service)...
May 17 00:20:17.303439 systemd[1]: Reloading...
May 17 00:20:17.354989 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:20:17.355016 systemd-tmpfiles[1401]: Skipping /boot May 17 00:20:17.433224 zram_generator::config[1428]: No configuration found. May 17 00:20:17.615056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:17.677795 systemd[1]: Reloading finished in 373 ms. May 17 00:20:17.697417 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:20:17.713014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:20:17.731548 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:20:17.745650 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:20:17.759807 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:20:17.766632 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:20:17.781932 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:20:17.793534 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:20:17.803343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:17.803609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:20:17.814016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:20:17.821670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:20:17.834039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:20:17.840969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:20:17.841178 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:17.842927 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:20:17.843119 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:20:17.851818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:20:17.852013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:20:17.852449 systemd-udevd[1496]: Using default interface naming scheme 'v255'. May 17 00:20:17.862919 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:20:17.863108 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:20:17.867975 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:20:17.868870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:20:17.874619 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 17 00:20:17.886042 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:17.886400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:20:17.890482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:20:17.907705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:20:17.920244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:20:17.926550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:20:17.926800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:17.928058 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:20:17.943601 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:20:17.948677 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:20:17.957496 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... May 17 00:20:17.960611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:17.961031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:20:17.964333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:20:17.968808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:20:17.968909 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:20:17.972419 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:17.973103 systemd[1]: Finished ensure-sysext.service. May 17 00:20:17.976157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:20:17.976895 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:20:17.986513 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:20:17.987337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:20:17.995731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:20:17.995973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:20:18.000367 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:20:18.000576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:20:18.010143 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:20:18.011758 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:20:18.012318 augenrules[1528]: No rules May 17 00:20:18.016343 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:20:18.048760 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 17 00:20:18.062575 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:20:18.122869 systemd-resolved[1495]: Positive Trust Anchors: May 17 00:20:18.122884 systemd-resolved[1495]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:20:18.122928 systemd-resolved[1495]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:20:18.124961 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:20:18.129116 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:20:18.150082 systemd-resolved[1495]: Using system hostname 'ci-4081.3.3-n-4e81e33f0f'. May 17 00:20:18.153858 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:20:18.164182 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:20:18.204164 systemd-networkd[1543]: lo: Link UP May 17 00:20:18.204181 systemd-networkd[1543]: lo: Gained carrier May 17 00:20:18.209806 systemd-networkd[1543]: Enumeration completed May 17 00:20:18.209964 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:20:18.214780 systemd[1]: Reached target network.target - Network. May 17 00:20:18.225486 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:20:18.242975 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:20:18.266663 systemd-networkd[1543]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:20:18.266676 systemd-networkd[1543]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:20:18.297616 kernel: mlx5_core 800d:00:02.0 enP32781s1: Link up May 17 00:20:18.330252 kernel: hv_netvsc 7ced8d2d-de81-7ced-8d2d-de817ced8d2d eth0: Data path switched to VF: enP32781s1 May 17 00:20:18.331572 systemd-networkd[1543]: enP32781s1: Link UP May 17 00:20:18.331740 systemd-networkd[1543]: eth0: Link UP May 17 00:20:18.331746 systemd-networkd[1543]: eth0: Gained carrier May 17 00:20:18.331776 systemd-networkd[1543]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:20:18.339646 systemd-networkd[1543]: enP32781s1: Gained carrier May 17 00:20:18.369106 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. May 17 00:20:18.378298 systemd-networkd[1543]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 17 00:20:18.402729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:20:18.419595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 17 00:20:18.419819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:18.442072 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:20:18.438875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:20:18.501393 kernel: hv_vmbus: registering driver hv_balloon May 17 00:20:18.510221 kernel: hv_vmbus: registering driver hyperv_fb May 17 00:20:18.518866 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 17 00:20:18.519005 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 17 00:20:18.526235 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 17 00:20:18.531500 kernel: Console: switching to colour dummy device 80x25 May 17 00:20:18.536327 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:20:18.583238 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1542) May 17 00:20:18.689140 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:20:18.689482 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:18.708549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:20:18.827225 kernel: kvm_intel: Using Hyper-V Enlightened VMCS May 17 00:20:18.854973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 17 00:20:18.868404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:20:18.908235 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:20:18.913270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:18.928549 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:20:18.936405 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:20:19.037686 lvm[1635]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:20:19.075755 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:20:19.089052 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:20:19.100551 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:20:19.107662 lvm[1637]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:20:19.136441 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:20:19.440439 systemd-networkd[1543]: eth0: Gained IPv6LL May 17 00:20:19.444418 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:20:19.448559 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:20:20.336505 systemd-networkd[1543]: enP32781s1: Gained IPv6LL May 17 00:20:23.643348 ldconfig[1286]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:20:23.654033 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:20:23.665497 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:20:23.697008 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
May 17 00:20:23.701756 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:20:23.705095 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:20:23.708441 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:20:23.711916 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:20:23.715164 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:20:23.718375 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:20:23.721474 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:20:23.721515 systemd[1]: Reached target paths.target - Path Units. May 17 00:20:23.724338 systemd[1]: Reached target timers.target - Timer Units. May 17 00:20:23.728933 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:20:23.734794 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:20:23.747865 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:20:23.752722 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:20:23.756022 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:20:23.759313 systemd[1]: Reached target basic.target - Basic System. May 17 00:20:23.762803 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:20:23.762852 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:20:23.771755 systemd[1]: Starting chronyd.service - NTP client/server... May 17 00:20:23.781388 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:20:23.804150 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:20:23.812515 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:20:23.830688 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:20:23.837581 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:20:23.841841 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:20:23.841901 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 17 00:20:23.846802 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 17 00:20:23.856151 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 17 00:20:23.864397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:23.874790 jq[1649]: false May 17 00:20:23.880413 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:20:23.898438 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:20:23.905710 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
May 17 00:20:23.908596 (chronyd)[1645]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 17 00:20:23.927444 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:20:23.931044 KVP[1651]: KVP starting; pid is:1651 May 17 00:20:23.944544 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:20:23.953742 chronyd[1663]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 17 00:20:23.957458 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:20:23.966261 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:20:23.967010 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:20:23.970136 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:20:23.983364 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:20:24.000717 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:20:24.001003 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:20:24.027805 update_engine[1666]: I20250517 00:20:24.027701 1666 main.cc:92] Flatcar Update Engine starting May 17 00:20:24.030856 dbus-daemon[1648]: [system] SELinux support is enabled May 17 00:20:24.031549 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:20:24.042465 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:20:24.042520 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 17 00:20:24.045156 extend-filesystems[1650]: Found loop4 May 17 00:20:24.045156 extend-filesystems[1650]: Found loop5 May 17 00:20:24.045156 extend-filesystems[1650]: Found loop6 May 17 00:20:24.045156 extend-filesystems[1650]: Found loop7 May 17 00:20:24.045156 extend-filesystems[1650]: Found sda May 17 00:20:24.045156 extend-filesystems[1650]: Found sda1 May 17 00:20:24.045156 extend-filesystems[1650]: Found sda2 May 17 00:20:24.045156 extend-filesystems[1650]: Found sda3 May 17 00:20:24.045156 extend-filesystems[1650]: Found usr May 17 00:20:24.045156 extend-filesystems[1650]: Found sda4 May 17 00:20:24.045156 extend-filesystems[1650]: Found sda6 May 17 00:20:24.045156 extend-filesystems[1650]: Found sda7 May 17 00:20:24.045156 extend-filesystems[1650]: Found sda9 May 17 00:20:24.045156 extend-filesystems[1650]: Checking size of /dev/sda9 May 17 00:20:24.255504 kernel: hv_utils: KVP IC version 4.0 May 17 00:20:24.256773 update_engine[1666]: I20250517 00:20:24.066512 1666 update_check_scheduler.cc:74] Next update check in 2m29s May 17 00:20:24.051174 chronyd[1663]: Timezone right/UTC failed leap second check, ignoring May 17 00:20:24.259290 extend-filesystems[1650]: Old size kept for /dev/sda9 May 17 00:20:24.259290 extend-filesystems[1650]: Found sr0 May 17 00:20:24.047528 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:20:24.051612 chronyd[1663]: Loaded seccomp filter (level 2) May 17 00:20:24.047562 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:20:24.294483 jq[1667]: true May 17 00:20:24.093145 KVP[1651]: KVP LIC Version: 3.1 May 17 00:20:24.060366 systemd[1]: Started chronyd.service - NTP client/server. May 17 00:20:24.224246 dbus-daemon[1648]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:20:24.073433 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:20:24.298833 tar[1675]: linux-amd64/LICENSE May 17 00:20:24.298833 tar[1675]: linux-amd64/helm May 17 00:20:24.073694 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:20:24.312023 coreos-metadata[1647]: May 17 00:20:24.310 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:20:24.312417 jq[1690]: true May 17 00:20:24.317246 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1702) May 17 00:20:24.080346 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:20:24.082309 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:20:24.099868 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:20:24.147531 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:20:24.147782 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:20:24.149308 systemd-logind[1665]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:20:24.157640 systemd-logind[1665]: New seat seat0. May 17 00:20:24.324683 coreos-metadata[1647]: May 17 00:20:24.323 INFO Fetch successful May 17 00:20:24.324683 coreos-metadata[1647]: May 17 00:20:24.324 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 17 00:20:24.163321 systemd[1]: Started systemd-logind.service - User Login Management. 
May 17 00:20:24.200757 (ntainerd)[1701]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:20:24.220756 systemd[1]: Started update-engine.service - Update Engine. May 17 00:20:24.245576 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:20:24.333617 coreos-metadata[1647]: May 17 00:20:24.329 INFO Fetch successful May 17 00:20:24.333617 coreos-metadata[1647]: May 17 00:20:24.332 INFO Fetching http://168.63.129.16/machine/f0bb0410-f6f3-4da4-add0-2c24178c965a/a06aa1c1%2D82af%2D4b5e%2D81be%2Df23dbc807975.%5Fci%2D4081.3.3%2Dn%2D4e81e33f0f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 17 00:20:24.335445 coreos-metadata[1647]: May 17 00:20:24.335 INFO Fetch successful May 17 00:20:24.339451 coreos-metadata[1647]: May 17 00:20:24.337 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 17 00:20:24.359857 coreos-metadata[1647]: May 17 00:20:24.359 INFO Fetch successful May 17 00:20:24.509162 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:20:24.517461 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:20:24.621354 sshd_keygen[1697]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:20:24.698473 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:20:24.714635 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:20:24.719491 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 17 00:20:24.743504 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:20:24.744305 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:20:24.757694 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:20:24.781140 locksmithd[1715]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:20:24.805595 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 17 00:20:25.083290 bash[1740]: Updated "/home/core/.ssh/authorized_keys" May 17 00:20:25.085352 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:20:25.096760 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 17 00:20:25.113267 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:20:25.127412 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:20:25.142737 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:20:25.150045 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:20:25.182344 tar[1675]: linux-amd64/README.md May 17 00:20:25.201292 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:20:25.496276 containerd[1701]: time="2025-05-17T00:20:25.495647100Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:20:25.527452 containerd[1701]: time="2025-05-17T00:20:25.527333400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:25.529102 containerd[1701]: time="2025-05-17T00:20:25.529044200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:20:25.529102 containerd[1701]: time="2025-05-17T00:20:25.529085800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:20:25.529102 containerd[1701]: time="2025-05-17T00:20:25.529108000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:20:25.529347 containerd[1701]: time="2025-05-17T00:20:25.529319800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:20:25.529393 containerd[1701]: time="2025-05-17T00:20:25.529351300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:20:25.529459 containerd[1701]: time="2025-05-17T00:20:25.529436800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:25.529459 containerd[1701]: time="2025-05-17T00:20:25.529454600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:25.529682 containerd[1701]: time="2025-05-17T00:20:25.529655600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:25.529682 containerd[1701]: time="2025-05-17T00:20:25.529676300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:20:25.529786 containerd[1701]: time="2025-05-17T00:20:25.529695200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:25.529786 containerd[1701]: time="2025-05-17T00:20:25.529707900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:20:25.529864 containerd[1701]: time="2025-05-17T00:20:25.529802800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:25.530061 containerd[1701]: time="2025-05-17T00:20:25.530030800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:25.530216 containerd[1701]: time="2025-05-17T00:20:25.530179000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:25.530282 containerd[1701]: time="2025-05-17T00:20:25.530215300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:20:25.530348 containerd[1701]: time="2025-05-17T00:20:25.530325100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:20:25.530423 containerd[1701]: time="2025-05-17T00:20:25.530404700Z" level=info msg="metadata content store policy set" policy=shared May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.885962100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886055000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886078500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886100100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886122200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886370200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886692900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886858600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886883400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886901500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886937700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886964700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.886982400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:20:25.887041 containerd[1701]: time="2025-05-17T00:20:25.887001600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887020500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887039000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887054600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887100100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887126800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887145200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887220300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887241600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887259000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887278700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887296400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887315300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887334700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:20:25.888552 containerd[1701]: time="2025-05-17T00:20:25.887356000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887373800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887402600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887425000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887449600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887500800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887520800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887548800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887618500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887644600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887659800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887676600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887691200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887708200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887721200Z" level=info msg="NRI interface is disabled by configuration." May 17 00:20:25.889162 containerd[1701]: time="2025-05-17T00:20:25.887736700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:20:25.889680 containerd[1701]: time="2025-05-17T00:20:25.888116700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:20:25.890681 containerd[1701]: time="2025-05-17T00:20:25.888880900Z" level=info msg="Connect containerd service" May 17 00:20:25.890681 containerd[1701]: time="2025-05-17T00:20:25.890094800Z" level=info msg="using legacy CRI server" May 17 00:20:25.890681 containerd[1701]: time="2025-05-17T00:20:25.890111300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:20:25.890681 containerd[1701]: time="2025-05-17T00:20:25.890308400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:20:25.891479 containerd[1701]: time="2025-05-17T00:20:25.891435100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:20:25.892697 containerd[1701]: time="2025-05-17T00:20:25.891582100Z" level=info msg="Start subscribing containerd event" May 17 00:20:25.892697 containerd[1701]: time="2025-05-17T00:20:25.891653000Z" level=info msg="Start recovering state" May 17 00:20:25.892697 containerd[1701]: time="2025-05-17T00:20:25.891737000Z" level=info msg="Start event monitor" May 17 00:20:25.892697 containerd[1701]: time="2025-05-17T00:20:25.891754700Z" level=info msg="Start snapshots syncer" May 17 00:20:25.892697 containerd[1701]: time="2025-05-17T00:20:25.891767900Z" level=info msg="Start cni network conf syncer for default" May 17 00:20:25.892697 containerd[1701]: time="2025-05-17T00:20:25.891785100Z" level=info msg="Start streaming server" May 17 00:20:25.892697 containerd[1701]: time="2025-05-17T00:20:25.892253300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:20:25.892697 containerd[1701]: time="2025-05-17T00:20:25.892309800Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:20:25.892576 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:20:25.897507 containerd[1701]: time="2025-05-17T00:20:25.897096000Z" level=info msg="containerd successfully booted in 0.403533s" May 17 00:20:25.960714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:25.965067 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:20:25.968382 systemd[1]: Startup finished in 915ms (firmware) + 45.462s (loader) + 1.227s (kernel) + 12.494s (initrd) + 13.891s (userspace) = 1min 13.991s. May 17 00:20:25.979534 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:26.203306 login[1795]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:20:26.206511 login[1796]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:20:26.223427 systemd-logind[1665]: New session 1 of user core. May 17 00:20:26.224799 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:20:26.233729 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:20:26.240159 systemd-logind[1665]: New session 2 of user core.
May 17 00:20:26.262641 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:20:26.277170 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:20:26.284307 (systemd)[1823]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:20:26.612363 systemd[1823]: Queued start job for default target default.target. May 17 00:20:26.617862 systemd[1823]: Created slice app.slice - User Application Slice. May 17 00:20:26.617917 systemd[1823]: Reached target paths.target - Paths. May 17 00:20:26.617935 systemd[1823]: Reached target timers.target - Timers. May 17 00:20:26.622066 systemd[1823]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:20:26.646535 systemd[1823]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:20:26.648591 systemd[1823]: Reached target sockets.target - Sockets. May 17 00:20:26.648617 systemd[1823]: Reached target basic.target - Basic System. May 17 00:20:26.648677 systemd[1823]: Reached target default.target - Main User Target. May 17 00:20:26.648716 systemd[1823]: Startup finished in 350ms. May 17 00:20:26.648860 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:20:26.655878 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:20:26.658845 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:20:26.874641 waagent[1792]: 2025-05-17T00:20:26.874445Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.878307Z INFO Daemon Daemon OS: flatcar 4081.3.3 May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.878578Z INFO Daemon Daemon Python: 3.11.9 May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.879465Z INFO Daemon Daemon Run daemon May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.879794Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.880717Z INFO Daemon Daemon Using waagent for provisioning May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.881375Z INFO Daemon Daemon Activate resource disk May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.881672Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.885741Z INFO Daemon Daemon Found device: None May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.886632Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.887131Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.889608Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:20:26.893849 waagent[1792]: 2025-05-17T00:20:26.890693Z INFO Daemon Daemon Running default provisioning handler May 17 00:20:26.934788 kubelet[1812]: E0517 00:20:26.934719 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:26.938892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:20:26.939090 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:20:26.939618 systemd[1]: kubelet.service: Consumed 1.088s CPU time. May 17 00:20:26.985714 waagent[1792]: 2025-05-17T00:20:26.985542Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 17 00:20:26.994442 waagent[1792]: 2025-05-17T00:20:26.994339Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:20:27.000733 waagent[1792]: 2025-05-17T00:20:26.997184Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:20:27.008649 waagent[1792]: 2025-05-17T00:20:27.008517Z INFO Daemon Daemon Copying ovf-env.xml May 17 00:20:27.057019 waagent[1792]: 2025-05-17T00:20:27.053261Z INFO Daemon Daemon Successfully mounted dvd May 17 00:20:27.073931 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 17 00:20:27.077177 waagent[1792]: 2025-05-17T00:20:27.077077Z INFO Daemon Daemon Detect protocol endpoint May 17 00:20:27.095166 waagent[1792]: 2025-05-17T00:20:27.078692Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:20:27.095166 waagent[1792]: 2025-05-17T00:20:27.079673Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 17 00:20:27.095166 waagent[1792]: 2025-05-17T00:20:27.080816Z INFO Daemon Daemon Test for route to 168.63.129.16 May 17 00:20:27.095166 waagent[1792]: 2025-05-17T00:20:27.081991Z INFO Daemon Daemon Route to 168.63.129.16 exists May 17 00:20:27.095166 waagent[1792]: 2025-05-17T00:20:27.082918Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 17 00:20:27.100146 waagent[1792]: 2025-05-17T00:20:27.100076Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 17 00:20:27.105598 waagent[1792]: 2025-05-17T00:20:27.105526Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 17 00:20:27.112249 waagent[1792]: 2025-05-17T00:20:27.107661Z INFO Daemon Daemon Server preferred version:2015-04-05 May 17 00:20:27.163715 waagent[1792]: 2025-05-17T00:20:27.163533Z INFO Daemon Daemon Initializing goal state during protocol detection May 17 00:20:27.167603 waagent[1792]: 2025-05-17T00:20:27.167506Z INFO Daemon Daemon Forcing an update of the goal state. May 17 00:20:27.173222 waagent[1792]: 2025-05-17T00:20:27.173135Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 17 00:20:27.191211 waagent[1792]: 2025-05-17T00:20:27.191124Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 17 00:20:27.195526 waagent[1792]: 2025-05-17T00:20:27.195444Z INFO Daemon May 17 00:20:27.197363 waagent[1792]: 2025-05-17T00:20:27.197275Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f0f1a224-cc3d-44c0-b5d3-5f4404b09dce eTag: 5982789218506948263 source: Fabric] May 17 00:20:27.213235 waagent[1792]: 2025-05-17T00:20:27.203065Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
May 17 00:20:27.213235 waagent[1792]: 2025-05-17T00:20:27.209693Z INFO Daemon May 17 00:20:27.213414 waagent[1792]: 2025-05-17T00:20:27.213286Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 17 00:20:27.236598 waagent[1792]: 2025-05-17T00:20:27.236540Z INFO Daemon Daemon Downloading artifacts profile blob May 17 00:20:27.320742 waagent[1792]: 2025-05-17T00:20:27.320635Z INFO Daemon Downloaded certificate {'thumbprint': '350236516E30CD13CE4A08928294AC903E1A24B2', 'hasPrivateKey': True} May 17 00:20:27.327319 waagent[1792]: 2025-05-17T00:20:27.327236Z INFO Daemon Fetch goal state completed May 17 00:20:27.337573 waagent[1792]: 2025-05-17T00:20:27.337503Z INFO Daemon Daemon Starting provisioning May 17 00:20:27.340430 waagent[1792]: 2025-05-17T00:20:27.340342Z INFO Daemon Daemon Handle ovf-env.xml. May 17 00:20:27.345466 waagent[1792]: 2025-05-17T00:20:27.341511Z INFO Daemon Daemon Set hostname [ci-4081.3.3-n-4e81e33f0f] May 17 00:20:27.347379 waagent[1792]: 2025-05-17T00:20:27.347295Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-n-4e81e33f0f] May 17 00:20:27.355620 waagent[1792]: 2025-05-17T00:20:27.349170Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 17 00:20:27.355620 waagent[1792]: 2025-05-17T00:20:27.350109Z INFO Daemon Daemon Primary interface is [eth0] May 17 00:20:27.365003 systemd-networkd[1543]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:20:27.365013 systemd-networkd[1543]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:20:27.365067 systemd-networkd[1543]: eth0: DHCP lease lost May 17 00:20:27.366483 waagent[1792]: 2025-05-17T00:20:27.366375Z INFO Daemon Daemon Create user account if not exists May 17 00:20:27.368302 waagent[1792]: 2025-05-17T00:20:27.367937Z INFO Daemon Daemon User core already exists, skip useradd May 17 00:20:27.368825 waagent[1792]: 2025-05-17T00:20:27.368777Z INFO Daemon Daemon Configure sudoer May 17 00:20:27.370120 waagent[1792]: 2025-05-17T00:20:27.370060Z INFO Daemon Daemon Configure sshd May 17 00:20:27.371464 waagent[1792]: 2025-05-17T00:20:27.371414Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 17 00:20:27.372202 waagent[1792]: 2025-05-17T00:20:27.371690Z INFO Daemon Daemon Deploy ssh public key. May 17 00:20:27.387312 systemd-networkd[1543]: eth0: DHCPv6 lease lost May 17 00:20:27.412283 systemd-networkd[1543]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 17 00:20:36.965503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:20:36.973515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:37.106841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:20:37.122801 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:37.817809 kubelet[1883]: E0517 00:20:37.817741 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:37.822015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:20:37.822285 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:20:47.843625 chronyd[1663]: Selected source PHC0 May 17 00:20:47.965629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:20:47.972648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:48.105177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:48.114659 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:48.809119 kubelet[1897]: E0517 00:20:48.809047 1897 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:48.812154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:20:48.812403 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:20:57.450469 waagent[1792]: 2025-05-17T00:20:57.450397Z INFO Daemon Daemon Provisioning complete May 17 00:20:57.466008 waagent[1792]: 2025-05-17T00:20:57.465919Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 17 00:20:57.470404 waagent[1792]: 2025-05-17T00:20:57.469551Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
May 17 00:20:57.480556 waagent[1792]: 2025-05-17T00:20:57.472643Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent May 17 00:20:57.627025 waagent[1905]: 2025-05-17T00:20:57.626898Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) May 17 00:20:57.627667 waagent[1905]: 2025-05-17T00:20:57.627114Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3 May 17 00:20:57.627667 waagent[1905]: 2025-05-17T00:20:57.627223Z INFO ExtHandler ExtHandler Python: 3.11.9 May 17 00:20:57.642774 waagent[1905]: 2025-05-17T00:20:57.642661Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:20:57.643040 waagent[1905]: 2025-05-17T00:20:57.642977Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:20:57.643134 waagent[1905]: 2025-05-17T00:20:57.643098Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:20:57.652281 waagent[1905]: 2025-05-17T00:20:57.652166Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 17 00:20:57.675155 waagent[1905]: 2025-05-17T00:20:57.675063Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 17 00:20:57.675953 waagent[1905]: 2025-05-17T00:20:57.675871Z INFO ExtHandler May 17 00:20:57.676096 waagent[1905]: 2025-05-17T00:20:57.676011Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ce4492c5-542c-479d-a754-ab50637a99e2 eTag: 5982789218506948263 source: Fabric] May 17 00:20:57.676553 waagent[1905]: 2025-05-17T00:20:57.676483Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 17 00:20:57.677340 waagent[1905]: 2025-05-17T00:20:57.677267Z INFO ExtHandler May 17 00:20:57.677463 waagent[1905]: 2025-05-17T00:20:57.677383Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 17 00:20:57.682704 waagent[1905]: 2025-05-17T00:20:57.682632Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 17 00:20:57.769872 waagent[1905]: 2025-05-17T00:20:57.769749Z INFO ExtHandler Downloaded certificate {'thumbprint': '350236516E30CD13CE4A08928294AC903E1A24B2', 'hasPrivateKey': True} May 17 00:20:57.770572 waagent[1905]: 2025-05-17T00:20:57.770502Z INFO ExtHandler Fetch goal state completed May 17 00:20:57.784984 waagent[1905]: 2025-05-17T00:20:57.784888Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1905 May 17 00:20:57.785232 waagent[1905]: 2025-05-17T00:20:57.785119Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 17 00:20:57.786981 waagent[1905]: 2025-05-17T00:20:57.786906Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:20:57.787424 waagent[1905]: 2025-05-17T00:20:57.787367Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 17 00:20:57.806773 waagent[1905]: 2025-05-17T00:20:57.806714Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:20:57.807058 waagent[1905]: 2025-05-17T00:20:57.807001Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:20:57.814630 waagent[1905]: 2025-05-17T00:20:57.814574Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now May 17 00:20:57.823058 systemd[1]: Reloading requested from client PID 1918 ('systemctl') (unit waagent.service)... May 17 00:20:57.823079 systemd[1]: Reloading... May 17 00:20:57.943363 zram_generator::config[1958]: No configuration found. May 17 00:20:58.073584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:58.159339 systemd[1]: Reloading finished in 335 ms. May 17 00:20:58.188114 waagent[1905]: 2025-05-17T00:20:58.187529Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service May 17 00:20:58.199169 systemd[1]: Reloading requested from client PID 2008 ('systemctl') (unit waagent.service)... May 17 00:20:58.199213 systemd[1]: Reloading... May 17 00:20:58.319267 zram_generator::config[2046]: No configuration found. May 17 00:20:58.446495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:58.537033 systemd[1]: Reloading finished in 337 ms. May 17 00:20:58.567792 waagent[1905]: 2025-05-17T00:20:58.567618Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 17 00:20:58.568006 waagent[1905]: 2025-05-17T00:20:58.567948Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 17 00:20:58.670388 waagent[1905]: 2025-05-17T00:20:58.670090Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 17 00:20:58.671973 waagent[1905]: 2025-05-17T00:20:58.671857Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] May 17 00:20:58.673246 waagent[1905]: 2025-05-17T00:20:58.673112Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:20:58.673392 waagent[1905]: 2025-05-17T00:20:58.673326Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:20:58.673531 waagent[1905]: 2025-05-17T00:20:58.673484Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:20:58.673853 waagent[1905]: 2025-05-17T00:20:58.673793Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 17 00:20:58.674106 waagent[1905]: 2025-05-17T00:20:58.674054Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:20:58.674106 waagent[1905]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:20:58.674106 waagent[1905]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:20:58.674106 waagent[1905]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:20:58.674106 waagent[1905]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:20:58.674106 waagent[1905]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:20:58.674106 waagent[1905]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:20:58.675115 waagent[1905]: 2025-05-17T00:20:58.675036Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
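The routing table dump above is raw /proc/net/route, which stores IPv4 addresses as little-endian hex: 0108C80A is 10.200.8.1 (the default gateway), and 0008C80A with mask 00FFFFFF is 10.200.8.0/24. A short Python sketch that decodes it the same way:

    import socket
    import struct

    def hex_to_ip(field: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex
        return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            p = line.split()
            # columns: Iface Destination Gateway Flags RefCnt Use Metric Mask ...
            print(p[0], hex_to_ip(p[1]), "via", hex_to_ip(p[2]), "mask", hex_to_ip(p[7]))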
May 17 00:20:58.675574 waagent[1905]: 2025-05-17T00:20:58.675510Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:20:58.675714 waagent[1905]: 2025-05-17T00:20:58.675631Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:20:58.675786 waagent[1905]: 2025-05-17T00:20:58.675721Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:20:58.675983 waagent[1905]: 2025-05-17T00:20:58.675907Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:20:58.676652 waagent[1905]: 2025-05-17T00:20:58.676470Z INFO EnvHandler ExtHandler Configure routes May 17 00:20:58.676652 waagent[1905]: 2025-05-17T00:20:58.676587Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:20:58.676772 waagent[1905]: 2025-05-17T00:20:58.676657Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 17 00:20:58.677651 waagent[1905]: 2025-05-17T00:20:58.677257Z INFO EnvHandler ExtHandler Gateway:None May 17 00:20:58.677651 waagent[1905]: 2025-05-17T00:20:58.677346Z INFO EnvHandler ExtHandler Routes:None May 17 00:20:58.677906 waagent[1905]: 2025-05-17T00:20:58.677863Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:20:58.685182 waagent[1905]: 2025-05-17T00:20:58.685118Z INFO ExtHandler ExtHandler May 17 00:20:58.685314 waagent[1905]: 2025-05-17T00:20:58.685269Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 599a3d1e-b359-40d5-96e7-b980f7c0fbff correlation 4fc98d41-1cb7-475d-a33a-44a36c879afa created: 2025-05-17T00:19:01.303535Z] May 17 00:20:58.686030 waagent[1905]: 2025-05-17T00:20:58.685965Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
May 17 00:20:58.687219 waagent[1905]: 2025-05-17T00:20:58.686838Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] May 17 00:20:58.698549 waagent[1905]: 2025-05-17T00:20:58.698434Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:20:58.698549 waagent[1905]: Executing ['ip', '-a', '-o', 'link']: May 17 00:20:58.698549 waagent[1905]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:20:58.698549 waagent[1905]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:de:81 brd ff:ff:ff:ff:ff:ff May 17 00:20:58.698549 waagent[1905]: 3: enP32781s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:de:81 brd ff:ff:ff:ff:ff:ff\ altname enP32781p0s2 May 17 00:20:58.698549 waagent[1905]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:20:58.698549 waagent[1905]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:20:58.698549 waagent[1905]: 2: eth0 inet 10.200.8.41/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:20:58.698549 waagent[1905]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:20:58.698549 waagent[1905]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 17 00:20:58.698549 waagent[1905]: 2: eth0 inet6 fe80::7eed:8dff:fe2d:de81/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 17 00:20:58.698549 waagent[1905]: 3: enP32781s1 inet6 fe80::7eed:8dff:fe2d:de81/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 17 00:20:58.731212 waagent[1905]: 2025-05-17T00:20:58.731032Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C32711C5-737A-4ACF-AA5B-5A879F5DDD5C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] May 17 00:20:58.752648 waagent[1905]: 2025-05-17T00:20:58.752540Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: May 17 00:20:58.752648 waagent[1905]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:20:58.752648 waagent[1905]: pkts bytes target prot opt in out source destination May 17 00:20:58.752648 waagent[1905]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:20:58.752648 waagent[1905]: pkts bytes target prot opt in out source destination May 17 00:20:58.752648 waagent[1905]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:20:58.752648 waagent[1905]: pkts bytes target prot opt in out source destination May 17 00:20:58.752648 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 17 00:20:58.752648 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:20:58.752648 waagent[1905]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:20:58.756468 waagent[1905]: 2025-05-17T00:20:58.756393Z INFO EnvHandler ExtHandler Current Firewall rules: May 17 00:20:58.756468 waagent[1905]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:20:58.756468 waagent[1905]: pkts bytes target prot opt in out source destination May 17 00:20:58.756468 waagent[1905]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:20:58.756468 waagent[1905]: pkts bytes target prot opt in out source destination May 17 00:20:58.756468 waagent[1905]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:20:58.756468 waagent[1905]: pkts bytes target prot opt in out source destination May 17 00:20:58.756468 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 17 00:20:58.756468 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:20:58.756468 waagent[1905]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:20:58.756909 waagent[1905]: 2025-05-17T00:20:58.756770Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 17 00:20:58.966143 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:20:58.972608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:59.104083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:59.117618 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:59.811851 kubelet[2142]: E0517 00:20:59.811731 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:59.814758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:20:59.814979 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:21:06.661575 kernel: hv_balloon: Max. dynamic memory size: 8192 MB May 17 00:21:09.398384 update_engine[1666]: I20250517 00:21:09.398257 1666 update_attempter.cc:509] Updating boot flags... May 17 00:21:09.463233 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2161) May 17 00:21:09.589237 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2163) May 17 00:21:09.965735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
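The fabric rules reported above pin the Azure WireServer (168.63.129.16): DNS and root-owned TCP are accepted, and any other new connection to it is dropped, so non-root processes cannot reach the metadata endpoint. Roughly the equivalent commands (waagent's exact invocation may differ), plus the sysfs write behind the "Set block dev timeout" line:

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
    echo 300 > /sys/block/sda/device/timeout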
May 17 00:21:09.972470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:10.100696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:10.106522 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:10.152007 kubelet[2223]: E0517 00:21:10.151897 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:10.155168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:10.155423 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:21:18.381992 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:21:18.391503 systemd[1]: Started sshd@0-10.200.8.41:22-10.200.16.10:45294.service - OpenSSH per-connection server daemon (10.200.16.10:45294). May 17 00:21:19.029818 sshd[2231]: Accepted publickey for core from 10.200.16.10 port 45294 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:21:19.031762 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:19.037674 systemd-logind[1665]: New session 3 of user core. May 17 00:21:19.043413 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:21:19.595596 systemd[1]: Started sshd@1-10.200.8.41:22-10.200.16.10:37026.service - OpenSSH per-connection server daemon (10.200.16.10:37026). May 17 00:21:20.215505 sshd[2236]: Accepted publickey for core from 10.200.16.10 port 37026 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:21:20.215678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:21:20.218880 sshd[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:20.224558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:20.234905 systemd-logind[1665]: New session 4 of user core. May 17 00:21:20.235503 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:21:20.353429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:20.356504 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:20.399334 kubelet[2247]: E0517 00:21:20.399224 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:20.401912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:20.402119 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:21:20.791332 systemd[1]: Started sshd@2-10.200.8.41:22-10.200.16.10:37038.service - OpenSSH per-connection server daemon (10.200.16.10:37038). May 17 00:21:20.979184 sshd[2236]: pam_unix(sshd:session): session closed for user core May 17 00:21:20.985580 systemd-logind[1665]: Session 4 logged out. 
Waiting for processes to exit. May 17 00:21:20.987763 systemd[1]: sshd@1-10.200.8.41:22-10.200.16.10:37026.service: Deactivated successfully. May 17 00:21:20.990048 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:21:20.991128 systemd-logind[1665]: Removed session 4. May 17 00:21:21.414736 sshd[2256]: Accepted publickey for core from 10.200.16.10 port 37038 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:21:21.416862 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:21.421677 systemd-logind[1665]: New session 5 of user core. May 17 00:21:21.424405 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:21:21.853697 sshd[2256]: pam_unix(sshd:session): session closed for user core May 17 00:21:21.858357 systemd[1]: sshd@2-10.200.8.41:22-10.200.16.10:37038.service: Deactivated successfully. May 17 00:21:21.860470 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:21:21.861365 systemd-logind[1665]: Session 5 logged out. Waiting for processes to exit. May 17 00:21:21.862505 systemd-logind[1665]: Removed session 5. May 17 00:21:21.968357 systemd[1]: Started sshd@3-10.200.8.41:22-10.200.16.10:37042.service - OpenSSH per-connection server daemon (10.200.16.10:37042). May 17 00:21:22.592765 sshd[2265]: Accepted publickey for core from 10.200.16.10 port 37042 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:21:22.594653 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:22.599097 systemd-logind[1665]: New session 6 of user core. May 17 00:21:22.610544 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:21:23.039935 sshd[2265]: pam_unix(sshd:session): session closed for user core May 17 00:21:23.043750 systemd[1]: sshd@3-10.200.8.41:22-10.200.16.10:37042.service: Deactivated successfully. May 17 00:21:23.046059 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:21:23.047817 systemd-logind[1665]: Session 6 logged out. Waiting for processes to exit. May 17 00:21:23.049026 systemd-logind[1665]: Removed session 6. May 17 00:21:23.150754 systemd[1]: Started sshd@4-10.200.8.41:22-10.200.16.10:37048.service - OpenSSH per-connection server daemon (10.200.16.10:37048). May 17 00:21:23.783894 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 37048 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:21:23.785640 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:23.789860 systemd-logind[1665]: New session 7 of user core. May 17 00:21:23.797416 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:21:24.213878 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:21:24.214573 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:24.230799 sudo[2275]: pam_unix(sudo:session): session closed for user root May 17 00:21:24.332900 sshd[2272]: pam_unix(sshd:session): session closed for user core May 17 00:21:24.336848 systemd[1]: sshd@4-10.200.8.41:22-10.200.16.10:37048.service: Deactivated successfully. May 17 00:21:24.339403 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:21:24.341338 systemd-logind[1665]: Session 7 logged out. Waiting for processes to exit. May 17 00:21:24.342630 systemd-logind[1665]: Removed session 7. 
May 17 00:21:24.446530 systemd[1]: Started sshd@5-10.200.8.41:22-10.200.16.10:37056.service - OpenSSH per-connection server daemon (10.200.16.10:37056). May 17 00:21:25.080521 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 37056 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:21:25.082168 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:25.086972 systemd-logind[1665]: New session 8 of user core. May 17 00:21:25.094442 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:21:25.424878 sudo[2284]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:21:25.425279 sudo[2284]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:25.429288 sudo[2284]: pam_unix(sudo:session): session closed for user root May 17 00:21:25.435025 sudo[2283]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:21:25.435467 sudo[2283]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:25.448588 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:21:25.452456 auditctl[2287]: No rules May 17 00:21:25.452887 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:21:25.453107 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:21:25.456058 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:21:25.489689 augenrules[2305]: No rules May 17 00:21:25.491280 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:21:25.492782 sudo[2283]: pam_unix(sudo:session): session closed for user root May 17 00:21:25.612579 sshd[2280]: pam_unix(sshd:session): session closed for user core May 17 00:21:25.615790 systemd[1]: sshd@5-10.200.8.41:22-10.200.16.10:37056.service: Deactivated successfully. May 17 00:21:25.617869 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:21:25.619674 systemd-logind[1665]: Session 8 logged out. Waiting for processes to exit. May 17 00:21:25.620641 systemd-logind[1665]: Removed session 8. May 17 00:21:25.721751 systemd[1]: Started sshd@6-10.200.8.41:22-10.200.16.10:37068.service - OpenSSH per-connection server daemon (10.200.16.10:37068). May 17 00:21:26.347768 sshd[2313]: Accepted publickey for core from 10.200.16.10 port 37068 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:21:26.349651 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:26.355378 systemd-logind[1665]: New session 9 of user core. May 17 00:21:26.364432 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:21:26.693301 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:21:26.693683 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:27.246571 systemd[1]: Starting docker.service - Docker Application Container Engine... 
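The sudo session above (session 8) empties /etc/audit/rules.d/ and restarts audit-rules.service, so both auditctl and augenrules report "No rules". The restart corresponds roughly to the following sequence; the unit's exact ExecStop/ExecStart lines are an assumption:

    auditctl -D        # on stop: flush the loaded ruleset
    augenrules --load  # on start: merge /etc/audit/rules.d/*.rules and load the result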
May 17 00:21:27.249123 (dockerd)[2331]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:21:27.805509 dockerd[2331]: time="2025-05-17T00:21:27.805440139Z" level=info msg="Starting up" May 17 00:21:28.068938 dockerd[2331]: time="2025-05-17T00:21:28.068522785Z" level=info msg="Loading containers: start." May 17 00:21:28.202486 kernel: Initializing XFRM netlink socket May 17 00:21:28.277595 systemd-networkd[1543]: docker0: Link UP May 17 00:21:28.311715 dockerd[2331]: time="2025-05-17T00:21:28.311660500Z" level=info msg="Loading containers: done." May 17 00:21:28.396281 dockerd[2331]: time="2025-05-17T00:21:28.396072408Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:21:28.396478 dockerd[2331]: time="2025-05-17T00:21:28.396310410Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:21:28.396528 dockerd[2331]: time="2025-05-17T00:21:28.396477812Z" level=info msg="Daemon has completed initialization" May 17 00:21:28.458872 dockerd[2331]: time="2025-05-17T00:21:28.458755882Z" level=info msg="API listen on /run/docker.sock" May 17 00:21:28.459002 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:21:29.326103 containerd[1701]: time="2025-05-17T00:21:29.325668406Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 17 00:21:30.067664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140213809.mount: Deactivated successfully. May 17 00:21:30.466116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 17 00:21:30.477586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:30.653664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:30.666716 (kubelet)[2495]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:30.710210 kubelet[2495]: E0517 00:21:30.710074 2495 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:30.712847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:30.713059 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
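dockerd's overlay2 warning above fires on kernels built with CONFIG_OVERLAY_FS_REDIRECT_DIR, which makes Docker fall back to the slower naive diff path for image builds. One way to confirm the kernel-side setting (the path exists only once the overlay module is loaded):

    cat /sys/module/overlay/parameters/redirect_dir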
May 17 00:21:32.573049 containerd[1701]: time="2025-05-17T00:21:32.572953331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:32.579883 containerd[1701]: time="2025-05-17T00:21:32.579727104Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075411" May 17 00:21:32.583451 containerd[1701]: time="2025-05-17T00:21:32.583348743Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:32.589997 containerd[1701]: time="2025-05-17T00:21:32.589871113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:32.592164 containerd[1701]: time="2025-05-17T00:21:32.591278829Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 3.265549822s" May 17 00:21:32.592164 containerd[1701]: time="2025-05-17T00:21:32.591361429Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 17 00:21:32.592729 containerd[1701]: time="2025-05-17T00:21:32.592685944Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 17 00:21:34.187562 containerd[1701]: time="2025-05-17T00:21:34.187492996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:34.190052 containerd[1701]: time="2025-05-17T00:21:34.189956023Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011398" May 17 00:21:34.194679 containerd[1701]: time="2025-05-17T00:21:34.194156968Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:34.203164 containerd[1701]: time="2025-05-17T00:21:34.203079464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:34.207000 containerd[1701]: time="2025-05-17T00:21:34.206867605Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.614122661s" May 17 00:21:34.208144 containerd[1701]: time="2025-05-17T00:21:34.207524312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 17 00:21:34.208899 
containerd[1701]: time="2025-05-17T00:21:34.208833826Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 17 00:21:35.647964 containerd[1701]: time="2025-05-17T00:21:35.647870303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:35.651218 containerd[1701]: time="2025-05-17T00:21:35.651074538Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148968" May 17 00:21:35.655288 containerd[1701]: time="2025-05-17T00:21:35.655151582Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:35.662464 containerd[1701]: time="2025-05-17T00:21:35.662340659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:35.663883 containerd[1701]: time="2025-05-17T00:21:35.663665973Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.454744746s" May 17 00:21:35.663883 containerd[1701]: time="2025-05-17T00:21:35.663734874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 17 00:21:35.665267 containerd[1701]: time="2025-05-17T00:21:35.664880786Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 17 00:21:36.869842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880871332.mount: Deactivated successfully. 
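The PullImage/ImageCreate pairs above are containerd's CRI plugin fetching the control-plane images ahead of bootstrap. Assuming crictl is installed and pointed at containerd's socket, the same images can be listed or pulled by hand:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images --digests
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.10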
May 17 00:21:37.465488 containerd[1701]: time="2025-05-17T00:21:37.465402661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:37.468950 containerd[1701]: time="2025-05-17T00:21:37.468843391Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889083" May 17 00:21:37.474039 containerd[1701]: time="2025-05-17T00:21:37.473965436Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:37.479273 containerd[1701]: time="2025-05-17T00:21:37.479128881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:37.480546 containerd[1701]: time="2025-05-17T00:21:37.479958388Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.815027301s" May 17 00:21:37.480546 containerd[1701]: time="2025-05-17T00:21:37.480021188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 17 00:21:37.481076 containerd[1701]: time="2025-05-17T00:21:37.481038297Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 17 00:21:38.035592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863875959.mount: Deactivated successfully. 
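Each pull logs "bytes read" and the elapsed time, which allows a quick throughput sanity check; for the kube-proxy pull above:

    # figures copied from the kube-proxy pull logged above
    bytes_read = 31_889_083
    seconds = 1.815027301
    print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # ~17.6 MB/s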
May 17 00:21:39.495645 containerd[1701]: time="2025-05-17T00:21:39.495587552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:39.497556 containerd[1701]: time="2025-05-17T00:21:39.497496169Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" May 17 00:21:39.501020 containerd[1701]: time="2025-05-17T00:21:39.500968999Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:39.505145 containerd[1701]: time="2025-05-17T00:21:39.505096435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:39.506351 containerd[1701]: time="2025-05-17T00:21:39.506177645Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.025069247s" May 17 00:21:39.506351 containerd[1701]: time="2025-05-17T00:21:39.506235945Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 17 00:21:39.507311 containerd[1701]: time="2025-05-17T00:21:39.507286154Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:21:39.977898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072942264.mount: Deactivated successfully. 
May 17 00:21:40.000406 containerd[1701]: time="2025-05-17T00:21:40.000352551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:40.003182 containerd[1701]: time="2025-05-17T00:21:40.003133375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 17 00:21:40.007851 containerd[1701]: time="2025-05-17T00:21:40.007796716Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:40.012883 containerd[1701]: time="2025-05-17T00:21:40.012831160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:40.013598 containerd[1701]: time="2025-05-17T00:21:40.013561066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 506.243311ms" May 17 00:21:40.013689 containerd[1701]: time="2025-05-17T00:21:40.013603067Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:21:40.014455 containerd[1701]: time="2025-05-17T00:21:40.014424574Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 17 00:21:40.715411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 17 00:21:40.725581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:40.877417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:40.880378 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:41.559164 kubelet[2623]: E0517 00:21:41.559108 2623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:41.561584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:41.561799 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
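The restart counter has now reached 7, every failure on the same missing config file. systemd tracks this counter as a unit property, so it can also be read directly:

    systemctl show kubelet.service --property=NRestarts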
May 17 00:21:43.096529 containerd[1701]: time="2025-05-17T00:21:43.096469931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:43.098719 containerd[1701]: time="2025-05-17T00:21:43.098499549Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142747" May 17 00:21:43.129585 containerd[1701]: time="2025-05-17T00:21:43.129504719Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:43.135647 containerd[1701]: time="2025-05-17T00:21:43.134271961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:43.135647 containerd[1701]: time="2025-05-17T00:21:43.135490971Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.121032997s" May 17 00:21:43.135647 containerd[1701]: time="2025-05-17T00:21:43.135529872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 17 00:21:45.809071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:45.821567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:45.860242 systemd[1]: Reloading requested from client PID 2665 ('systemctl') (unit session-9.scope)... May 17 00:21:45.860266 systemd[1]: Reloading... May 17 00:21:46.013959 zram_generator::config[2708]: No configuration found. May 17 00:21:46.137000 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:46.218617 systemd[1]: Reloading finished in 357 ms. May 17 00:21:46.273469 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:21:46.273572 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:21:46.273886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:46.279659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:46.667260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:46.677588 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:21:47.462236 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:47.462236 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
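The deprecation notice above means the image garbage collector now learns the sandbox (pause) image from the CRI runtime instead of the kubelet's --pod-infra-container-image flag. In containerd that is the CRI plugin's sandbox_image setting; an illustrative fragment, with the path and table name as typical for containerd 1.7 rather than read from this host:

    # /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"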
May 17 00:21:47.462236 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:47.462236 kubelet[2774]: I0517 00:21:47.461821 2774 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:21:47.793113 kubelet[2774]: I0517 00:21:47.793061 2774 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:21:47.793113 kubelet[2774]: I0517 00:21:47.793094 2774 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:21:47.793523 kubelet[2774]: I0517 00:21:47.793497 2774 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:21:47.820063 kubelet[2774]: I0517 00:21:47.819565 2774 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:21:47.820063 kubelet[2774]: E0517 00:21:47.819949 2774 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 00:21:47.826846 kubelet[2774]: E0517 00:21:47.826795 2774 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:21:47.826846 kubelet[2774]: I0517 00:21:47.826843 2774 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:21:47.830851 kubelet[2774]: I0517 00:21:47.830818 2774 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:21:47.831154 kubelet[2774]: I0517 00:21:47.831096 2774 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:21:47.831383 kubelet[2774]: I0517 00:21:47.831146 2774 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-4e81e33f0f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:21:47.831533 kubelet[2774]: I0517 00:21:47.831391 2774 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:21:47.831533 kubelet[2774]: I0517 00:21:47.831405 2774 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:21:47.831619 kubelet[2774]: I0517 00:21:47.831572 2774 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:47.833653 kubelet[2774]: I0517 00:21:47.833626 2774 kubelet.go:480] "Attempting to sync node with API server" May 17 00:21:47.833761 kubelet[2774]: I0517 00:21:47.833663 2774 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:21:47.833761 kubelet[2774]: I0517 00:21:47.833702 2774 kubelet.go:386] "Adding apiserver pod source" May 17 00:21:47.833761 kubelet[2774]: I0517 00:21:47.833720 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:21:47.843290 kubelet[2774]: E0517 00:21:47.842696 2774 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:21:47.843290 kubelet[2774]: E0517 00:21:47.842809 2774 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-4e81e33f0f&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" May 17 00:21:47.843662 kubelet[2774]: I0517 00:21:47.843642 2774 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:21:47.844443 kubelet[2774]: I0517 00:21:47.844415 2774 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:21:47.846238 kubelet[2774]: W0517 00:21:47.845181 2774 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:21:47.848707 kubelet[2774]: I0517 00:21:47.848686 2774 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:21:47.848875 kubelet[2774]: I0517 00:21:47.848866 2774 server.go:1289] "Started kubelet" May 17 00:21:47.855642 kubelet[2774]: I0517 00:21:47.855596 2774 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:21:47.859233 kubelet[2774]: I0517 00:21:47.857880 2774 server.go:317] "Adding debug handlers to kubelet server" May 17 00:21:47.861707 kubelet[2774]: E0517 00:21:47.860008 2774 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-4e81e33f0f.1840289c742c64ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-4e81e33f0f,UID:ci-4081.3.3-n-4e81e33f0f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-4e81e33f0f,},FirstTimestamp:2025-05-17 00:21:47.84882811 +0000 UTC m=+1.166925461,LastTimestamp:2025-05-17 00:21:47.84882811 +0000 UTC m=+1.166925461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-4e81e33f0f,}" May 17 00:21:47.864526 kubelet[2774]: E0517 00:21:47.864459 2774 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:21:47.864935 kubelet[2774]: I0517 00:21:47.864875 2774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:21:47.865321 kubelet[2774]: I0517 00:21:47.865296 2774 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:21:47.867276 kubelet[2774]: I0517 00:21:47.867250 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:21:47.868563 kubelet[2774]: I0517 00:21:47.868525 2774 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:21:47.868747 kubelet[2774]: I0517 00:21:47.868720 2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:21:47.871284 kubelet[2774]: I0517 00:21:47.871260 2774 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:21:47.871370 kubelet[2774]: I0517 00:21:47.871339 2774 reconciler.go:26] "Reconciler: start to sync state" May 17 00:21:47.872331 kubelet[2774]: E0517 00:21:47.872297 2774 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:21:47.873713 kubelet[2774]: E0517 00:21:47.873657 2774 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" May 17 00:21:47.874340 kubelet[2774]: I0517 00:21:47.874311 2774 factory.go:223] Registration of the systemd container factory successfully May 17 00:21:47.874493 kubelet[2774]: I0517 00:21:47.874462 2774 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:21:47.875155 kubelet[2774]: E0517 00:21:47.875112 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-4e81e33f0f?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="200ms" May 17 00:21:47.876788 kubelet[2774]: I0517 00:21:47.876602 2774 factory.go:223] Registration of the containerd container factory successfully May 17 00:21:47.925996 kubelet[2774]: I0517 00:21:47.925933 2774 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:21:47.928281 kubelet[2774]: I0517 00:21:47.927934 2774 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:21:47.928281 kubelet[2774]: I0517 00:21:47.927984 2774 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:21:47.928281 kubelet[2774]: I0517 00:21:47.928049 2774 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
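The Node Config dump above fixes the values the kubelet will run with: cgroupDriver systemd, an OOM score adjust of -999, the default hard-eviction thresholds, and /etc/kubernetes/manifests as the static pod path. Written out as the /var/lib/kubelet/config.yaml the earlier crash loop was waiting for, a minimal sketch (the runtime socket path is an assumption; cluster-specific fields omitted):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"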
May 17 00:21:47.928281 kubelet[2774]: I0517 00:21:47.928062 2774 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:21:47.928281 kubelet[2774]: E0517 00:21:47.928147 2774 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:21:47.930580 kubelet[2774]: E0517 00:21:47.930327 2774 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:21:47.935842 kubelet[2774]: I0517 00:21:47.935812 2774 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:21:47.935842 kubelet[2774]: I0517 00:21:47.935838 2774 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:21:47.936015 kubelet[2774]: I0517 00:21:47.935864 2774 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:47.947446 kubelet[2774]: I0517 00:21:47.947393 2774 policy_none.go:49] "None policy: Start" May 17 00:21:47.947446 kubelet[2774]: I0517 00:21:47.947435 2774 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:21:47.947625 kubelet[2774]: I0517 00:21:47.947476 2774 state_mem.go:35] "Initializing new in-memory state store" May 17 00:21:47.962157 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:21:47.974245 kubelet[2774]: E0517 00:21:47.974024 2774 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" May 17 00:21:47.975125 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:21:47.978856 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:21:47.986116 kubelet[2774]: E0517 00:21:47.986079 2774 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:21:47.986116 kubelet[2774]: I0517 00:21:47.986369 2774 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:21:47.986116 kubelet[2774]: I0517 00:21:47.986397 2774 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:21:47.986873 kubelet[2774]: I0517 00:21:47.986708 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:21:47.989873 kubelet[2774]: E0517 00:21:47.989844 2774 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:21:47.990071 kubelet[2774]: E0517 00:21:47.989898 2774 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-4e81e33f0f\" not found" May 17 00:21:48.044826 systemd[1]: Created slice kubepods-burstable-pod07414b0a609b600d5e9b4404ff7d6ded.slice - libcontainer container kubepods-burstable-pod07414b0a609b600d5e9b4404ff7d6ded.slice. 
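The kubepods slices created above are the systemd half of the cgroup setup the kubelet just described (cgroupDriver systemd, cgroup root /): one slice per QoS class, then one per pod. The hierarchy can be inspected live with:

    systemd-cgls --unit kubepods.slice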
May 17 00:21:48.054274 kubelet[2774]: E0517 00:21:48.053909 2774 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.060478 systemd[1]: Created slice kubepods-burstable-pod65b1814beeb3d0ec7bdcea896bcc5e1e.slice - libcontainer container kubepods-burstable-pod65b1814beeb3d0ec7bdcea896bcc5e1e.slice. May 17 00:21:48.062925 kubelet[2774]: E0517 00:21:48.062895 2774 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.064841 systemd[1]: Created slice kubepods-burstable-pod379ddfc7d06dc0cf08475ab72deb3b94.slice - libcontainer container kubepods-burstable-pod379ddfc7d06dc0cf08475ab72deb3b94.slice. May 17 00:21:48.066910 kubelet[2774]: E0517 00:21:48.066882 2774 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.073059 kubelet[2774]: I0517 00:21:48.073000 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07414b0a609b600d5e9b4404ff7d6ded-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-4e81e33f0f\" (UID: \"07414b0a609b600d5e9b4404ff7d6ded\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.073059 kubelet[2774]: I0517 00:21:48.073057 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07414b0a609b600d5e9b4404ff7d6ded-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-4e81e33f0f\" (UID: \"07414b0a609b600d5e9b4404ff7d6ded\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.073245 kubelet[2774]: I0517 00:21:48.073079 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.073245 kubelet[2774]: I0517 00:21:48.073100 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.073245 kubelet[2774]: I0517 00:21:48.073123 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.073245 kubelet[2774]: I0517 00:21:48.073155 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/07414b0a609b600d5e9b4404ff7d6ded-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-4e81e33f0f\" (UID: \"07414b0a609b600d5e9b4404ff7d6ded\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.073245 kubelet[2774]: I0517 00:21:48.073174 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.073428 kubelet[2774]: I0517 00:21:48.073231 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.073428 kubelet[2774]: I0517 00:21:48.073252 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/379ddfc7d06dc0cf08475ab72deb3b94-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-4e81e33f0f\" (UID: \"379ddfc7d06dc0cf08475ab72deb3b94\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.076520 kubelet[2774]: E0517 00:21:48.076486 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-4e81e33f0f?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="400ms" May 17 00:21:48.088643 kubelet[2774]: I0517 00:21:48.088606 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.089005 kubelet[2774]: E0517 00:21:48.088976 2774 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.292043 kubelet[2774]: I0517 00:21:48.291721 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.292377 kubelet[2774]: E0517 00:21:48.292337 2774 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.355505 containerd[1701]: time="2025-05-17T00:21:48.355365198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-4e81e33f0f,Uid:07414b0a609b600d5e9b4404ff7d6ded,Namespace:kube-system,Attempt:0,}" May 17 00:21:48.364220 containerd[1701]: time="2025-05-17T00:21:48.364106182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-4e81e33f0f,Uid:65b1814beeb3d0ec7bdcea896bcc5e1e,Namespace:kube-system,Attempt:0,}" May 17 00:21:48.370203 containerd[1701]: time="2025-05-17T00:21:48.369942839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-4e81e33f0f,Uid:379ddfc7d06dc0cf08475ab72deb3b94,Namespace:kube-system,Attempt:0,}" May 17 00:21:48.478324 kubelet[2774]: E0517 00:21:48.478255 2774 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-4e81e33f0f?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="800ms" May 17 00:21:48.694530 kubelet[2774]: I0517 00:21:48.694417 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.694863 kubelet[2774]: E0517 00:21:48.694809 2774 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:48.897881 kubelet[2774]: E0517 00:21:48.897820 2774 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:21:48.916804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754530676.mount: Deactivated successfully. May 17 00:21:48.964019 containerd[1701]: time="2025-05-17T00:21:48.963855369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:48.967515 containerd[1701]: time="2025-05-17T00:21:48.967459104Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:48.972173 containerd[1701]: time="2025-05-17T00:21:48.972109049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" May 17 00:21:48.975498 containerd[1701]: time="2025-05-17T00:21:48.975442181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:21:48.979376 containerd[1701]: time="2025-05-17T00:21:48.979324919Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:48.985270 containerd[1701]: time="2025-05-17T00:21:48.985221676Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:48.988606 containerd[1701]: time="2025-05-17T00:21:48.988505807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:21:48.998042 containerd[1701]: time="2025-05-17T00:21:48.997958398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:48.999516 containerd[1701]: time="2025-05-17T00:21:48.998889207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.653124ms" May 17 00:21:49.000549 containerd[1701]: time="2025-05-17T00:21:49.000506023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 645.049624ms" May 17 00:21:49.010042 containerd[1701]: time="2025-05-17T00:21:49.009987014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 639.960875ms" May 17 00:21:49.122080 kubelet[2774]: E0517 00:21:49.121916 2774 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-4e81e33f0f&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:21:49.122660 kubelet[2774]: E0517 00:21:49.122339 2774 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:21:49.279505 kubelet[2774]: E0517 00:21:49.279458 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-4e81e33f0f?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="1.6s" May 17 00:21:49.363446 containerd[1701]: time="2025-05-17T00:21:49.353558330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:49.363446 containerd[1701]: time="2025-05-17T00:21:49.353622430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:49.363446 containerd[1701]: time="2025-05-17T00:21:49.353661731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.363446 containerd[1701]: time="2025-05-17T00:21:49.353763032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.366209 containerd[1701]: time="2025-05-17T00:21:49.364098931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:49.366209 containerd[1701]: time="2025-05-17T00:21:49.364223033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:49.366209 containerd[1701]: time="2025-05-17T00:21:49.364264833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.366209 containerd[1701]: time="2025-05-17T00:21:49.364719437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.371533 containerd[1701]: time="2025-05-17T00:21:49.371421802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:49.372203 containerd[1701]: time="2025-05-17T00:21:49.371563303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:49.372203 containerd[1701]: time="2025-05-17T00:21:49.371588504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.372203 containerd[1701]: time="2025-05-17T00:21:49.372068408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.397645 systemd[1]: Started cri-containerd-caeb397d56cc45aae625e828ac5e755ade9ce7a507056e1b12b44acc28d168a5.scope - libcontainer container caeb397d56cc45aae625e828ac5e755ade9ce7a507056e1b12b44acc28d168a5. May 17 00:21:49.408256 systemd[1]: Started cri-containerd-ee5f748f9039f9f7aaf3f15d27586a8bfbcdf03f44a2c80d14722843511fe562.scope - libcontainer container ee5f748f9039f9f7aaf3f15d27586a8bfbcdf03f44a2c80d14722843511fe562. May 17 00:21:49.418396 systemd[1]: Started cri-containerd-03bba033e910e679e657265d42e27e9e08ce77bdd3c38d98e713630b4ce0c100.scope - libcontainer container 03bba033e910e679e657265d42e27e9e08ce77bdd3c38d98e713630b4ce0c100. May 17 00:21:49.493387 containerd[1701]: time="2025-05-17T00:21:49.493344578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-4e81e33f0f,Uid:379ddfc7d06dc0cf08475ab72deb3b94,Namespace:kube-system,Attempt:0,} returns sandbox id \"caeb397d56cc45aae625e828ac5e755ade9ce7a507056e1b12b44acc28d168a5\"" May 17 00:21:49.501827 kubelet[2774]: I0517 00:21:49.500927 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:49.501827 kubelet[2774]: E0517 00:21:49.501477 2774 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:49.501827 kubelet[2774]: E0517 00:21:49.501545 2774 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:21:49.509881 containerd[1701]: time="2025-05-17T00:21:49.509833238Z" level=info msg="CreateContainer within sandbox \"caeb397d56cc45aae625e828ac5e755ade9ce7a507056e1b12b44acc28d168a5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:21:49.516843 containerd[1701]: time="2025-05-17T00:21:49.516717204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-4e81e33f0f,Uid:65b1814beeb3d0ec7bdcea896bcc5e1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"03bba033e910e679e657265d42e27e9e08ce77bdd3c38d98e713630b4ce0c100\"" May 17 00:21:49.517171 
containerd[1701]: time="2025-05-17T00:21:49.516808505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-4e81e33f0f,Uid:07414b0a609b600d5e9b4404ff7d6ded,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee5f748f9039f9f7aaf3f15d27586a8bfbcdf03f44a2c80d14722843511fe562\"" May 17 00:21:49.549103 containerd[1701]: time="2025-05-17T00:21:49.547670503Z" level=info msg="CreateContainer within sandbox \"03bba033e910e679e657265d42e27e9e08ce77bdd3c38d98e713630b4ce0c100\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:21:49.557020 containerd[1701]: time="2025-05-17T00:21:49.556812891Z" level=info msg="CreateContainer within sandbox \"ee5f748f9039f9f7aaf3f15d27586a8bfbcdf03f44a2c80d14722843511fe562\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:21:49.628789 containerd[1701]: time="2025-05-17T00:21:49.628743485Z" level=info msg="CreateContainer within sandbox \"03bba033e910e679e657265d42e27e9e08ce77bdd3c38d98e713630b4ce0c100\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"229ecf3ffcce3220884a1e6bbcde0670b6e3c9a6c3df59ee0126e0c894635195\"" May 17 00:21:49.629864 containerd[1701]: time="2025-05-17T00:21:49.629824595Z" level=info msg="StartContainer for \"229ecf3ffcce3220884a1e6bbcde0670b6e3c9a6c3df59ee0126e0c894635195\"" May 17 00:21:49.631369 containerd[1701]: time="2025-05-17T00:21:49.631232109Z" level=info msg="CreateContainer within sandbox \"caeb397d56cc45aae625e828ac5e755ade9ce7a507056e1b12b44acc28d168a5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"50483e4fd17a6531553c7881bd7b3c016a70256e6c89ac73d96bcefcc20d515b\"" May 17 00:21:49.632949 containerd[1701]: time="2025-05-17T00:21:49.631993716Z" level=info msg="StartContainer for \"50483e4fd17a6531553c7881bd7b3c016a70256e6c89ac73d96bcefcc20d515b\"" May 17 00:21:49.659001 containerd[1701]: time="2025-05-17T00:21:49.658942376Z" level=info msg="CreateContainer within sandbox \"ee5f748f9039f9f7aaf3f15d27586a8bfbcdf03f44a2c80d14722843511fe562\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6352d7075297546507ae89d1256459f9d6aa2749e0bccae72b1d986403bf2308\"" May 17 00:21:49.660622 containerd[1701]: time="2025-05-17T00:21:49.659901886Z" level=info msg="StartContainer for \"6352d7075297546507ae89d1256459f9d6aa2749e0bccae72b1d986403bf2308\"" May 17 00:21:49.675748 systemd[1]: Started cri-containerd-229ecf3ffcce3220884a1e6bbcde0670b6e3c9a6c3df59ee0126e0c894635195.scope - libcontainer container 229ecf3ffcce3220884a1e6bbcde0670b6e3c9a6c3df59ee0126e0c894635195. May 17 00:21:49.678331 systemd[1]: Started cri-containerd-50483e4fd17a6531553c7881bd7b3c016a70256e6c89ac73d96bcefcc20d515b.scope - libcontainer container 50483e4fd17a6531553c7881bd7b3c016a70256e6c89ac73d96bcefcc20d515b. May 17 00:21:49.710427 systemd[1]: Started cri-containerd-6352d7075297546507ae89d1256459f9d6aa2749e0bccae72b1d986403bf2308.scope - libcontainer container 6352d7075297546507ae89d1256459f9d6aa2749e0bccae72b1d986403bf2308. 
May 17 00:21:49.792480 containerd[1701]: time="2025-05-17T00:21:49.792243863Z" level=info msg="StartContainer for \"229ecf3ffcce3220884a1e6bbcde0670b6e3c9a6c3df59ee0126e0c894635195\" returns successfully" May 17 00:21:49.794434 containerd[1701]: time="2025-05-17T00:21:49.794295082Z" level=info msg="StartContainer for \"6352d7075297546507ae89d1256459f9d6aa2749e0bccae72b1d986403bf2308\" returns successfully" May 17 00:21:49.806565 containerd[1701]: time="2025-05-17T00:21:49.806392499Z" level=info msg="StartContainer for \"50483e4fd17a6531553c7881bd7b3c016a70256e6c89ac73d96bcefcc20d515b\" returns successfully" May 17 00:21:49.855556 kubelet[2774]: E0517 00:21:49.855502 2774 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 00:21:49.950602 kubelet[2774]: E0517 00:21:49.950565 2774 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:49.958139 kubelet[2774]: E0517 00:21:49.955070 2774 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:49.963592 kubelet[2774]: E0517 00:21:49.963554 2774 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:51.729022 kubelet[2774]: I0517 00:21:51.728977 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:51.734613 kubelet[2774]: E0517 00:21:51.734571 2774 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:51.735182 kubelet[2774]: E0517 00:21:51.735144 2774 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:51.735496 kubelet[2774]: E0517 00:21:51.735471 2774 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.433334 kubelet[2774]: E0517 00:21:52.433273 2774 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-4e81e33f0f\" not found" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.624219 kubelet[2774]: I0517 00:21:52.623112 2774 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.624219 kubelet[2774]: E0517 00:21:52.623165 2774 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-n-4e81e33f0f\": node \"ci-4081.3.3-n-4e81e33f0f\" not found" May 17 00:21:52.655793 kubelet[2774]: E0517 00:21:52.655747 2774 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-4e81e33f0f\" not found" May 17 00:21:52.681348 kubelet[2774]: I0517 00:21:52.681114 2774 kubelet.go:3309] 
"Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.731735 kubelet[2774]: I0517 00:21:52.731672 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.792060 kubelet[2774]: E0517 00:21:52.791964 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-4e81e33f0f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.792060 kubelet[2774]: I0517 00:21:52.792051 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.794158 kubelet[2774]: E0517 00:21:52.792431 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-4e81e33f0f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.794333 kubelet[2774]: E0517 00:21:52.794263 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.794333 kubelet[2774]: I0517 00:21:52.794292 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.796753 kubelet[2774]: E0517 00:21:52.796694 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-4e81e33f0f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:52.842336 kubelet[2774]: I0517 00:21:52.841164 2774 apiserver.go:52] "Watching apiserver" May 17 00:21:52.871865 kubelet[2774]: I0517 00:21:52.871790 2774 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:21:53.303693 kubelet[2774]: I0517 00:21:53.303372 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:53.312774 kubelet[2774]: I0517 00:21:53.312627 2774 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:21:55.112964 systemd[1]: Reloading requested from client PID 3059 ('systemctl') (unit session-9.scope)... May 17 00:21:55.112985 systemd[1]: Reloading... May 17 00:21:55.223349 zram_generator::config[3099]: No configuration found. May 17 00:21:55.353479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:55.452278 systemd[1]: Reloading finished in 338 ms. May 17 00:21:55.495361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:55.502070 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:21:55.502284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:55.510909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:55.821897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:21:55.835673 (kubelet)[3166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:21:55.883657 kubelet[3166]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:55.883657 kubelet[3166]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:21:55.883657 kubelet[3166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:55.884168 kubelet[3166]: I0517 00:21:55.883763 3166 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:21:55.898727 kubelet[3166]: I0517 00:21:55.898681 3166 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:21:55.898727 kubelet[3166]: I0517 00:21:55.898720 3166 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:21:55.901216 kubelet[3166]: I0517 00:21:55.900590 3166 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:21:55.902676 kubelet[3166]: I0517 00:21:55.902642 3166 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 17 00:21:55.905847 kubelet[3166]: I0517 00:21:55.905647 3166 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:21:55.909827 kubelet[3166]: E0517 00:21:55.909750 3166 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:21:55.909827 kubelet[3166]: I0517 00:21:55.909822 3166 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:21:55.913519 kubelet[3166]: I0517 00:21:55.913476 3166 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:21:55.913777 kubelet[3166]: I0517 00:21:55.913730 3166 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:21:55.913958 kubelet[3166]: I0517 00:21:55.913771 3166 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-4e81e33f0f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:21:55.914082 kubelet[3166]: I0517 00:21:55.913967 3166 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:21:55.914082 kubelet[3166]: I0517 00:21:55.913982 3166 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:21:55.914082 kubelet[3166]: I0517 00:21:55.914044 3166 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:55.914286 kubelet[3166]: I0517 00:21:55.914269 3166 kubelet.go:480] "Attempting to sync node with API server" May 17 00:21:55.914362 kubelet[3166]: I0517 00:21:55.914298 3166 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:21:55.914362 kubelet[3166]: I0517 00:21:55.914329 3166 kubelet.go:386] "Adding apiserver pod source" May 17 00:21:55.914362 kubelet[3166]: I0517 00:21:55.914343 3166 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:21:55.918210 kubelet[3166]: I0517 00:21:55.916667 3166 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:21:55.918210 kubelet[3166]: I0517 00:21:55.917308 3166 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:21:55.927258 kubelet[3166]: I0517 00:21:55.925128 3166 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:21:55.927258 kubelet[3166]: I0517 00:21:55.925178 3166 server.go:1289] "Started kubelet" May 17 00:21:55.929114 kubelet[3166]: I0517 00:21:55.928924 3166 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:21:55.944873 kubelet[3166]: I0517 
00:21:55.944537 3166 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:21:55.945699 kubelet[3166]: I0517 00:21:55.945661 3166 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:21:55.948459 kubelet[3166]: I0517 00:21:55.948436 3166 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:21:55.952074 kubelet[3166]: I0517 00:21:55.952047 3166 server.go:317] "Adding debug handlers to kubelet server" May 17 00:21:55.952267 kubelet[3166]: I0517 00:21:55.952251 3166 reconciler.go:26] "Reconciler: start to sync state" May 17 00:21:55.952322 kubelet[3166]: I0517 00:21:55.952088 3166 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:21:55.957305 kubelet[3166]: I0517 00:21:55.956651 3166 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:21:55.960781 kubelet[3166]: I0517 00:21:55.946603 3166 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:21:55.960781 kubelet[3166]: I0517 00:21:55.957666 3166 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:21:55.960781 kubelet[3166]: I0517 00:21:55.959431 3166 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:21:55.960781 kubelet[3166]: I0517 00:21:55.959861 3166 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:21:55.960781 kubelet[3166]: I0517 00:21:55.959887 3166 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:21:55.960781 kubelet[3166]: I0517 00:21:55.959917 3166 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:21:55.960781 kubelet[3166]: E0517 00:21:55.959972 3166 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:21:55.968208 kubelet[3166]: E0517 00:21:55.968143 3166 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:21:55.969318 kubelet[3166]: I0517 00:21:55.969273 3166 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:21:55.971846 kubelet[3166]: I0517 00:21:55.971807 3166 factory.go:223] Registration of the containerd container factory successfully May 17 00:21:55.971846 kubelet[3166]: I0517 00:21:55.971824 3166 factory.go:223] Registration of the systemd container factory successfully May 17 00:21:56.028892 kubelet[3166]: I0517 00:21:56.028816 3166 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:21:56.028892 kubelet[3166]: I0517 00:21:56.028882 3166 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:21:56.029105 kubelet[3166]: I0517 00:21:56.028906 3166 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:56.029151 kubelet[3166]: I0517 00:21:56.029129 3166 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:21:56.029215 kubelet[3166]: I0517 00:21:56.029143 3166 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:21:56.029215 kubelet[3166]: I0517 00:21:56.029169 3166 policy_none.go:49] "None policy: Start" May 17 00:21:56.029215 kubelet[3166]: I0517 00:21:56.029181 3166 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:21:56.029339 kubelet[3166]: I0517 00:21:56.029225 3166 state_mem.go:35] "Initializing new in-memory state store" May 17 00:21:56.029385 kubelet[3166]: I0517 00:21:56.029378 3166 state_mem.go:75] "Updated machine memory state" May 17 00:21:56.034163 kubelet[3166]: E0517 00:21:56.033242 3166 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:21:56.034163 kubelet[3166]: I0517 00:21:56.033399 3166 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:21:56.034163 kubelet[3166]: I0517 00:21:56.033411 3166 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:21:56.034163 kubelet[3166]: I0517 00:21:56.034002 3166 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:21:56.036911 kubelet[3166]: E0517 00:21:56.036873 3166 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:21:56.061033 kubelet[3166]: I0517 00:21:56.060973 3166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.061510 kubelet[3166]: I0517 00:21:56.061486 3166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.062106 kubelet[3166]: I0517 00:21:56.062049 3166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.070041 kubelet[3166]: I0517 00:21:56.069978 3166 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:21:56.075309 kubelet[3166]: I0517 00:21:56.075158 3166 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:21:56.076207 kubelet[3166]: I0517 00:21:56.076093 3166 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:21:56.076207 kubelet[3166]: E0517 00:21:56.076158 3166 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-4e81e33f0f\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.142158 kubelet[3166]: I0517 00:21:56.142009 3166 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154488 kubelet[3166]: I0517 00:21:56.154149 3166 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154488 kubelet[3166]: I0517 00:21:56.154164 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154488 kubelet[3166]: I0517 00:21:56.154233 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154488 kubelet[3166]: I0517 00:21:56.154257 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154488 kubelet[3166]: I0517 00:21:56.154274 3166 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154488 kubelet[3166]: I0517 00:21:56.154283 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/379ddfc7d06dc0cf08475ab72deb3b94-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-4e81e33f0f\" (UID: \"379ddfc7d06dc0cf08475ab72deb3b94\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154838 kubelet[3166]: I0517 00:21:56.154305 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07414b0a609b600d5e9b4404ff7d6ded-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-4e81e33f0f\" (UID: \"07414b0a609b600d5e9b4404ff7d6ded\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154838 kubelet[3166]: I0517 00:21:56.154328 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07414b0a609b600d5e9b4404ff7d6ded-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-4e81e33f0f\" (UID: \"07414b0a609b600d5e9b4404ff7d6ded\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154838 kubelet[3166]: I0517 00:21:56.154351 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07414b0a609b600d5e9b4404ff7d6ded-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-4e81e33f0f\" (UID: \"07414b0a609b600d5e9b4404ff7d6ded\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154838 kubelet[3166]: I0517 00:21:56.154372 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.154838 kubelet[3166]: I0517 00:21:56.154392 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65b1814beeb3d0ec7bdcea896bcc5e1e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-4e81e33f0f\" (UID: \"65b1814beeb3d0ec7bdcea896bcc5e1e\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" May 17 00:21:56.915528 kubelet[3166]: I0517 00:21:56.915479 3166 apiserver.go:52] "Watching apiserver" May 17 00:21:56.953229 kubelet[3166]: I0517 00:21:56.953147 3166 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:21:57.062210 kubelet[3166]: I0517 00:21:57.061550 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-4e81e33f0f" podStartSLOduration=1.061510928 podStartE2EDuration="1.061510928s" podCreationTimestamp="2025-05-17 00:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:57.060951622 +0000 UTC m=+1.221477759" watchObservedRunningTime="2025-05-17 00:21:57.061510928 +0000 UTC m=+1.222037065" May 17 00:21:57.062210 kubelet[3166]: I0517 00:21:57.062154 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-4e81e33f0f" podStartSLOduration=1.062142034 podStartE2EDuration="1.062142034s" podCreationTimestamp="2025-05-17 00:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:57.035422672 +0000 UTC m=+1.195948709" watchObservedRunningTime="2025-05-17 00:21:57.062142034 +0000 UTC m=+1.222668171" May 17 00:21:57.092040 kubelet[3166]: I0517 00:21:57.091976 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-4e81e33f0f" podStartSLOduration=4.091961426 podStartE2EDuration="4.091961426s" podCreationTimestamp="2025-05-17 00:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:57.077457484 +0000 UTC m=+1.237983521" watchObservedRunningTime="2025-05-17 00:21:57.091961426 +0000 UTC m=+1.252487463" May 17 00:21:59.268099 kubelet[3166]: I0517 00:21:59.268055 3166 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:21:59.268608 containerd[1701]: time="2025-05-17T00:21:59.268468234Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:21:59.268930 kubelet[3166]: I0517 00:21:59.268730 3166 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:22:00.348112 systemd[1]: Created slice kubepods-besteffort-pod96705442_a9cb_4ea9_b404_26baf4823f87.slice - libcontainer container kubepods-besteffort-pod96705442_a9cb_4ea9_b404_26baf4823f87.slice. May 17 00:22:00.364820 systemd[1]: Created slice kubepods-besteffort-poda4f9383e_db49_48a5_82ea_0742b9f0ac87.slice - libcontainer container kubepods-besteffort-poda4f9383e_db49_48a5_82ea_0742b9f0ac87.slice. May 17 00:22:00.383279 kubelet[3166]: I0517 00:22:00.382810 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4f9383e-db49-48a5-82ea-0742b9f0ac87-kube-proxy\") pod \"kube-proxy-dkdqf\" (UID: \"a4f9383e-db49-48a5-82ea-0742b9f0ac87\") " pod="kube-system/kube-proxy-dkdqf" May 17 00:22:00.383279 kubelet[3166]: I0517 00:22:00.382864 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4f9383e-db49-48a5-82ea-0742b9f0ac87-xtables-lock\") pod \"kube-proxy-dkdqf\" (UID: \"a4f9383e-db49-48a5-82ea-0742b9f0ac87\") " pod="kube-system/kube-proxy-dkdqf" May 17 00:22:00.383279 kubelet[3166]: I0517 00:22:00.382895 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/96705442-a9cb-4ea9-b404-26baf4823f87-var-lib-calico\") pod \"tigera-operator-844669ff44-h2lzg\" (UID: \"96705442-a9cb-4ea9-b404-26baf4823f87\") " pod="tigera-operator/tigera-operator-844669ff44-h2lzg" May 17 00:22:00.383279 kubelet[3166]: I0517 00:22:00.382921 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb7cw\" (UniqueName: \"kubernetes.io/projected/96705442-a9cb-4ea9-b404-26baf4823f87-kube-api-access-mb7cw\") pod \"tigera-operator-844669ff44-h2lzg\" (UID: \"96705442-a9cb-4ea9-b404-26baf4823f87\") " pod="tigera-operator/tigera-operator-844669ff44-h2lzg" May 17 00:22:00.383279 kubelet[3166]: I0517 00:22:00.382942 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a4f9383e-db49-48a5-82ea-0742b9f0ac87-lib-modules\") pod \"kube-proxy-dkdqf\" (UID: \"a4f9383e-db49-48a5-82ea-0742b9f0ac87\") " pod="kube-system/kube-proxy-dkdqf" May 17 00:22:00.383967 kubelet[3166]: I0517 00:22:00.382966 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvkl8\" (UniqueName: \"kubernetes.io/projected/a4f9383e-db49-48a5-82ea-0742b9f0ac87-kube-api-access-mvkl8\") pod \"kube-proxy-dkdqf\" (UID: \"a4f9383e-db49-48a5-82ea-0742b9f0ac87\") " pod="kube-system/kube-proxy-dkdqf" May 17 00:22:00.657392 containerd[1701]: time="2025-05-17T00:22:00.657134173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-h2lzg,Uid:96705442-a9cb-4ea9-b404-26baf4823f87,Namespace:tigera-operator,Attempt:0,}" May 17 00:22:00.671019 containerd[1701]: time="2025-05-17T00:22:00.670972684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkdqf,Uid:a4f9383e-db49-48a5-82ea-0742b9f0ac87,Namespace:kube-system,Attempt:0,}" May 17 00:22:00.732212 containerd[1701]: time="2025-05-17T00:22:00.731416405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:00.732212 containerd[1701]: time="2025-05-17T00:22:00.731547307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:00.732212 containerd[1701]: time="2025-05-17T00:22:00.731582308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.732462 containerd[1701]: time="2025-05-17T00:22:00.732320119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.752916 containerd[1701]: time="2025-05-17T00:22:00.752542127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:00.752916 containerd[1701]: time="2025-05-17T00:22:00.752611228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:00.752916 containerd[1701]: time="2025-05-17T00:22:00.752632928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.752916 containerd[1701]: time="2025-05-17T00:22:00.752734730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.778410 systemd[1]: Started cri-containerd-138328690af81d5b7e5eefe4be77fb51a7d26292a26911a8564df45dc1aaf3b8.scope - libcontainer container 138328690af81d5b7e5eefe4be77fb51a7d26292a26911a8564df45dc1aaf3b8. May 17 00:22:00.783738 systemd[1]: Started cri-containerd-ea810b6e5c06b713b4ed1acef1f25b0b6ad68fc62e1e20263d80f665b2acef5c.scope - libcontainer container ea810b6e5c06b713b4ed1acef1f25b0b6ad68fc62e1e20263d80f665b2acef5c. 
May 17 00:22:00.817244 containerd[1701]: time="2025-05-17T00:22:00.817169211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkdqf,Uid:a4f9383e-db49-48a5-82ea-0742b9f0ac87,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea810b6e5c06b713b4ed1acef1f25b0b6ad68fc62e1e20263d80f665b2acef5c\"" May 17 00:22:00.833959 containerd[1701]: time="2025-05-17T00:22:00.832686048Z" level=info msg="CreateContainer within sandbox \"ea810b6e5c06b713b4ed1acef1f25b0b6ad68fc62e1e20263d80f665b2acef5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:22:00.862929 containerd[1701]: time="2025-05-17T00:22:00.862874908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-h2lzg,Uid:96705442-a9cb-4ea9-b404-26baf4823f87,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"138328690af81d5b7e5eefe4be77fb51a7d26292a26911a8564df45dc1aaf3b8\"" May 17 00:22:00.865555 containerd[1701]: time="2025-05-17T00:22:00.865386346Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:22:00.881314 containerd[1701]: time="2025-05-17T00:22:00.881267088Z" level=info msg="CreateContainer within sandbox \"ea810b6e5c06b713b4ed1acef1f25b0b6ad68fc62e1e20263d80f665b2acef5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b7354920e2d21edfb784bb86b20f800f631897ecdd01e7e9c1d0b5a228c6fdc\"" May 17 00:22:00.882399 containerd[1701]: time="2025-05-17T00:22:00.882339904Z" level=info msg="StartContainer for \"6b7354920e2d21edfb784bb86b20f800f631897ecdd01e7e9c1d0b5a228c6fdc\"" May 17 00:22:00.910376 systemd[1]: Started cri-containerd-6b7354920e2d21edfb784bb86b20f800f631897ecdd01e7e9c1d0b5a228c6fdc.scope - libcontainer container 6b7354920e2d21edfb784bb86b20f800f631897ecdd01e7e9c1d0b5a228c6fdc. May 17 00:22:00.944560 containerd[1701]: time="2025-05-17T00:22:00.944518551Z" level=info msg="StartContainer for \"6b7354920e2d21edfb784bb86b20f800f631897ecdd01e7e9c1d0b5a228c6fdc\" returns successfully" May 17 00:22:01.057162 kubelet[3166]: I0517 00:22:01.057092 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dkdqf" podStartSLOduration=1.057065966 podStartE2EDuration="1.057065966s" podCreationTimestamp="2025-05-17 00:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:01.031878882 +0000 UTC m=+5.192404919" watchObservedRunningTime="2025-05-17 00:22:01.057065966 +0000 UTC m=+5.217592003" May 17 00:22:02.528858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774489268.mount: Deactivated successfully. 
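
The pod_startup_latency_tracker line above can be checked by hand: kube-proxy-dkdqf was created at 00:22:00 and observed running at 00:22:01.057065966, so podStartSLOduration equals the E2E duration of 1.057065966s because no image pull happened (both pull timestamps are the zero value 0001-01-01). For pods that do pull, the tracker subtracts the pull window, which is why the tigera-operator entry further below reports an SLO duration of about 4.99s against an E2E duration of about 7.79s. Verifying the kube-proxy arithmetic with timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-05-17 00:22:00 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-05-17 00:22:01.057065966 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Matches podStartSLOduration=1.057065966 logged for kube-proxy-dkdqf.
	fmt.Println(observed.Sub(created)) // 1.057065966s
}
```
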
May 17 00:22:03.649915 containerd[1701]: time="2025-05-17T00:22:03.649859364Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.652846 containerd[1701]: time="2025-05-17T00:22:03.652776408Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:22:03.657677 containerd[1701]: time="2025-05-17T00:22:03.657618282Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.661639 containerd[1701]: time="2025-05-17T00:22:03.661582242Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.662895 containerd[1701]: time="2025-05-17T00:22:03.662325954Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.796880306s" May 17 00:22:03.662895 containerd[1701]: time="2025-05-17T00:22:03.662366554Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:22:03.669401 containerd[1701]: time="2025-05-17T00:22:03.669363261Z" level=info msg="CreateContainer within sandbox \"138328690af81d5b7e5eefe4be77fb51a7d26292a26911a8564df45dc1aaf3b8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:22:03.703868 containerd[1701]: time="2025-05-17T00:22:03.703809386Z" level=info msg="CreateContainer within sandbox \"138328690af81d5b7e5eefe4be77fb51a7d26292a26911a8564df45dc1aaf3b8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c93c21d8be69804c1e59d70b22b43bad76f1d2432e463054e5c7088eb2b5ba86\"" May 17 00:22:03.705499 containerd[1701]: time="2025-05-17T00:22:03.704438595Z" level=info msg="StartContainer for \"c93c21d8be69804c1e59d70b22b43bad76f1d2432e463054e5c7088eb2b5ba86\"" May 17 00:22:03.739749 systemd[1]: run-containerd-runc-k8s.io-c93c21d8be69804c1e59d70b22b43bad76f1d2432e463054e5c7088eb2b5ba86-runc.pvTeKh.mount: Deactivated successfully. May 17 00:22:03.748386 systemd[1]: Started cri-containerd-c93c21d8be69804c1e59d70b22b43bad76f1d2432e463054e5c7088eb2b5ba86.scope - libcontainer container c93c21d8be69804c1e59d70b22b43bad76f1d2432e463054e5c7088eb2b5ba86. 
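
The pull above resolves the floating tag quay.io/tigera/operator:v1.38.0 to the content-addressed digest quay.io/tigera/operator@sha256:e0a34b..., and containerd reports both alongside the local image id. A string-level sketch of splitting such references into repository, tag, and digest, covering only the shapes that appear in this log (real parsing lives in the containerd/distribution reference libraries and handles many more cases):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef separates an image reference into repository, tag, and digest.
// Handles only the shapes seen in this log: repo:tag and repo@sha256:....
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		return ref[:i], "", ref[i+1:]
	}
	// The tag is after the last ":", but only if it follows the last "/"
	// (otherwise the ":" belongs to a registry port, e.g. host:5000/repo).
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		return ref[:i], ref[i+1:], ""
	}
	return ref, "", ""
}

func main() {
	for _, ref := range []string{
		"quay.io/tigera/operator:v1.38.0",
		"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775",
		"registry.k8s.io/pause:3.8",
	} {
		repo, tag, digest := splitRef(ref)
		fmt.Printf("repo=%s tag=%s digest=%s\n", repo, tag, digest)
	}
}
```
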
May 17 00:22:03.778594 containerd[1701]: time="2025-05-17T00:22:03.778524724Z" level=info msg="StartContainer for \"c93c21d8be69804c1e59d70b22b43bad76f1d2432e463054e5c7088eb2b5ba86\" returns successfully" May 17 00:22:07.792223 kubelet[3166]: I0517 00:22:07.790425 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-h2lzg" podStartSLOduration=4.991731333 podStartE2EDuration="7.790405367s" podCreationTimestamp="2025-05-17 00:22:00 +0000 UTC" firstStartedPulling="2025-05-17 00:22:00.864782937 +0000 UTC m=+5.025308974" lastFinishedPulling="2025-05-17 00:22:03.663456971 +0000 UTC m=+7.823983008" observedRunningTime="2025-05-17 00:22:04.030130857 +0000 UTC m=+8.190656894" watchObservedRunningTime="2025-05-17 00:22:07.790405367 +0000 UTC m=+11.950931404" May 17 00:22:10.082931 sudo[2316]: pam_unix(sudo:session): session closed for user root May 17 00:22:10.188453 sshd[2313]: pam_unix(sshd:session): session closed for user core May 17 00:22:10.192529 systemd-logind[1665]: Session 9 logged out. Waiting for processes to exit. May 17 00:22:10.194390 systemd[1]: sshd@6-10.200.8.41:22-10.200.16.10:37068.service: Deactivated successfully. May 17 00:22:10.199269 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:22:10.200438 systemd[1]: session-9.scope: Consumed 4.740s CPU time, 161.4M memory peak, 0B memory swap peak. May 17 00:22:10.203889 systemd-logind[1665]: Removed session 9. May 17 00:22:14.264928 systemd[1]: Created slice kubepods-besteffort-pod85fd6871_c963_431b_a332_6d641c410f71.slice - libcontainer container kubepods-besteffort-pod85fd6871_c963_431b_a332_6d641c410f71.slice. May 17 00:22:14.281314 kubelet[3166]: I0517 00:22:14.281270 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85fd6871-c963-431b-a332-6d641c410f71-tigera-ca-bundle\") pod \"calico-typha-798fd6667-mmj7l\" (UID: \"85fd6871-c963-431b-a332-6d641c410f71\") " pod="calico-system/calico-typha-798fd6667-mmj7l" May 17 00:22:14.281314 kubelet[3166]: I0517 00:22:14.281322 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/85fd6871-c963-431b-a332-6d641c410f71-typha-certs\") pod \"calico-typha-798fd6667-mmj7l\" (UID: \"85fd6871-c963-431b-a332-6d641c410f71\") " pod="calico-system/calico-typha-798fd6667-mmj7l" May 17 00:22:14.281824 kubelet[3166]: I0517 00:22:14.281344 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vksq6\" (UniqueName: \"kubernetes.io/projected/85fd6871-c963-431b-a332-6d641c410f71-kube-api-access-vksq6\") pod \"calico-typha-798fd6667-mmj7l\" (UID: \"85fd6871-c963-431b-a332-6d641c410f71\") " pod="calico-system/calico-typha-798fd6667-mmj7l" May 17 00:22:14.573579 containerd[1701]: time="2025-05-17T00:22:14.572671493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-798fd6667-mmj7l,Uid:85fd6871-c963-431b-a332-6d641c410f71,Namespace:calico-system,Attempt:0,}" May 17 00:22:14.645655 containerd[1701]: time="2025-05-17T00:22:14.645310717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:14.645655 containerd[1701]: time="2025-05-17T00:22:14.645463719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:14.646126 containerd[1701]: time="2025-05-17T00:22:14.645640221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:14.646787 containerd[1701]: time="2025-05-17T00:22:14.646735133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:14.674565 systemd[1]: Created slice kubepods-besteffort-pod1b21bc11_f267_4f20_a9e6_d64d69f2a7df.slice - libcontainer container kubepods-besteffort-pod1b21bc11_f267_4f20_a9e6_d64d69f2a7df.slice. May 17 00:22:14.684150 kubelet[3166]: I0517 00:22:14.684103 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-policysync\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684308 kubelet[3166]: I0517 00:22:14.684158 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-cni-log-dir\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684308 kubelet[3166]: I0517 00:22:14.684229 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-var-run-calico\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684308 kubelet[3166]: I0517 00:22:14.684257 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-xtables-lock\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684308 kubelet[3166]: I0517 00:22:14.684280 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-cni-net-dir\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684308 kubelet[3166]: I0517 00:22:14.684299 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-flexvol-driver-host\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684518 kubelet[3166]: I0517 00:22:14.684319 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-lib-modules\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684518 kubelet[3166]: I0517 00:22:14.684344 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-var-lib-calico\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684518 kubelet[3166]: I0517 00:22:14.684365 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhgzn\" (UniqueName: \"kubernetes.io/projected/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-kube-api-access-vhgzn\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684518 kubelet[3166]: I0517 00:22:14.684389 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-tigera-ca-bundle\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684518 kubelet[3166]: I0517 00:22:14.684414 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-cni-bin-dir\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.684719 kubelet[3166]: I0517 00:22:14.684439 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1b21bc11-f267-4f20-a9e6-d64d69f2a7df-node-certs\") pod \"calico-node-kd2n2\" (UID: \"1b21bc11-f267-4f20-a9e6-d64d69f2a7df\") " pod="calico-system/calico-node-kd2n2" May 17 00:22:14.695403 systemd[1]: Started cri-containerd-f198157d37b71ee440f43ee323616cc9405c8957b26b92b8822330594db869a6.scope - libcontainer container f198157d37b71ee440f43ee323616cc9405c8957b26b92b8822330594db869a6. May 17 00:22:14.773909 containerd[1701]: time="2025-05-17T00:22:14.773837976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-798fd6667-mmj7l,Uid:85fd6871-c963-431b-a332-6d641c410f71,Namespace:calico-system,Attempt:0,} returns sandbox id \"f198157d37b71ee440f43ee323616cc9405c8957b26b92b8822330594db869a6\"" May 17 00:22:14.776383 containerd[1701]: time="2025-05-17T00:22:14.776077901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:22:14.788636 kubelet[3166]: E0517 00:22:14.788554 3166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:14.788636 kubelet[3166]: W0517 00:22:14.788580 3166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:14.788949 kubelet[3166]: E0517 00:22:14.788799 3166 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:22:14.923652 kubelet[3166]: E0517 00:22:14.923296 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5tzj" podUID="062cbe61-e0c7-468e-8be4-9dc29bebfa6f" May 17 00:22:14.980389 containerd[1701]: time="2025-05-17T00:22:14.980345320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kd2n2,Uid:1b21bc11-f267-4f20-a9e6-d64d69f2a7df,Namespace:calico-system,Attempt:0,}"
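Each "RunPodSandbox for &PodSandboxMetadata{...}" entry is containerd's CRI plugin logging a gRPC request from kubelet. Below is a hedged sketch of that call against the published CRI API (k8s.io/cri-api), not kubelet code; the sandbox config is pared down to bare metadata, where a real kubelet request also carries labels, a log directory, and Linux sandbox options, and the socket path is containerd's stock default.

```go
// sandbox.go: illustrative CRI client only; the config below is a minimal
// assumption-laden subset of what kubelet actually sends.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-kd2n2",
				Uid:       "1b21bc11-f267-4f20-a9e6-d64d69f2a7df",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The returned id is what the later "returns sandbox id" entry echoes.
	log.Println("sandbox id:", resp.PodSandboxId)
}
```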
May 17 00:22:14.989873 kubelet[3166]: I0517 00:22:14.988001 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/062cbe61-e0c7-468e-8be4-9dc29bebfa6f-socket-dir\") pod \"csi-node-driver-q5tzj\" (UID: \"062cbe61-e0c7-468e-8be4-9dc29bebfa6f\") " pod="calico-system/csi-node-driver-q5tzj" May 17 00:22:14.989873 kubelet[3166]: I0517 00:22:14.988463 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/062cbe61-e0c7-468e-8be4-9dc29bebfa6f-varrun\") pod \"csi-node-driver-q5tzj\" (UID: \"062cbe61-e0c7-468e-8be4-9dc29bebfa6f\") " pod="calico-system/csi-node-driver-q5tzj" May 17 00:22:14.990322 kubelet[3166]: I0517 00:22:14.988913 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/062cbe61-e0c7-468e-8be4-9dc29bebfa6f-registration-dir\") pod \"csi-node-driver-q5tzj\" (UID: \"062cbe61-e0c7-468e-8be4-9dc29bebfa6f\") " pod="calico-system/csi-node-driver-q5tzj" May 17 00:22:14.991010 kubelet[3166]: I0517 00:22:14.990550 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf4xt\" (UniqueName: \"kubernetes.io/projected/062cbe61-e0c7-468e-8be4-9dc29bebfa6f-kube-api-access-gf4xt\") pod \"csi-node-driver-q5tzj\" (UID: \"062cbe61-e0c7-468e-8be4-9dc29bebfa6f\") " pod="calico-system/csi-node-driver-q5tzj" May 17 00:22:14.991010 kubelet[3166]: I0517 00:22:14.990958 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062cbe61-e0c7-468e-8be4-9dc29bebfa6f-kubelet-dir\") pod \"csi-node-driver-q5tzj\" (UID: \"062cbe61-e0c7-468e-8be4-9dc29bebfa6f\") " pod="calico-system/csi-node-driver-q5tzj"
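The FlexVolume triplet that recurs through this stretch is a single failure reported three ways: the nodeagent~uds/uds binary does not exist, so the exec fails (the W line), its output is therefore empty, and unmarshalling empty bytes is what produces "unexpected end of JSON input" (both E lines). The following is a standalone repro of that chain; the bare "uds" lookup is a stand-in for kubelet's driver invocation, and DriverStatus is only a minimal shape, not kubelet's full type.

```go
// flexrepro.go: reproduces the paired W/E messages above in isolation.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a minimal sketch of what kubelet expects a driver to print.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// A bare command name missing from $PATH fails LookPath, mirroring the
	// W line: "error: executable file not found in $PATH, output: """.
	out, err := exec.Command("uds", "init").Output()
	fmt.Println("driver call failed:", err)

	// Unmarshalling the empty output then yields exactly the E line's
	// "unexpected end of JSON input".
	var st DriverStatus
	fmt.Println("unmarshal failed:", json.Unmarshal(out, &st))
}
```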
May 17 00:22:15.038396 containerd[1701]: time="2025-05-17T00:22:15.038302178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:15.038396 containerd[1701]: time="2025-05-17T00:22:15.038350679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:15.038396 containerd[1701]: time="2025-05-17T00:22:15.038365279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:15.038876 containerd[1701]: time="2025-05-17T00:22:15.038687282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:15.065758 systemd[1]: Started cri-containerd-e096faf7bb8a887d3779e340cd79408e0d3f8be5e13435afe4215c49694a1575.scope - libcontainer container e096faf7bb8a887d3779e340cd79408e0d3f8be5e13435afe4215c49694a1575.
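What kubelet was waiting for from the missing driver is a small contract: a FlexVolume binary invoked with init must print a JSON status with a capabilities map on stdout. Here is a minimal sketch of just that handshake, assuming a driver that needs no attach/detach support; the real Calico uds driver implements considerably more than this.

```go
// udsdriver.go: sketch of the documented FlexVolume init handshake only.
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// kubelet parses this to learn the driver's capabilities; attach:false
		// tells it no controller-side attach/detach is needed.
		fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
		return
	}
	// Operations this sketch does not implement are reported as unsupported.
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}
```

Dropping such a binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds (directory vendor~driver, executable named after the driver) is what silences the probe errors; that is precisely the path the W lines report as missing.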
May 17 00:22:15.142458 containerd[1701]: time="2025-05-17T00:22:15.142406660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kd2n2,Uid:1b21bc11-f267-4f20-a9e6-d64d69f2a7df,Namespace:calico-system,Attempt:0,} returns sandbox id \"e096faf7bb8a887d3779e340cd79408e0d3f8be5e13435afe4215c49694a1575\"" May 17 00:22:16.277989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641852110.mount: Deactivated successfully. May 17 00:22:16.961276 kubelet[3166]: E0517 00:22:16.961209 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5tzj" podUID="062cbe61-e0c7-468e-8be4-9dc29bebfa6f" May 17 00:22:17.190904 containerd[1701]: time="2025-05-17T00:22:17.190836787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:17.192823 containerd[1701]: time="2025-05-17T00:22:17.192672716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:22:17.196852 containerd[1701]: time="2025-05-17T00:22:17.196513278Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:17.201321 containerd[1701]: time="2025-05-17T00:22:17.201278855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:17.201982 containerd[1701]: time="2025-05-17T00:22:17.201940866Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.425819964s" May 17 00:22:17.202076 containerd[1701]: time="2025-05-17T00:22:17.201986667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:22:17.205213 containerd[1701]: time="2025-05-17T00:22:17.204939315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:22:17.228480 containerd[1701]: time="2025-05-17T00:22:17.228443194Z" level=info msg="CreateContainer within sandbox \"f198157d37b71ee440f43ee323616cc9405c8957b26b92b8822330594db869a6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:22:17.265577 containerd[1701]: time="2025-05-17T00:22:17.265525894Z" level=info msg="CreateContainer within sandbox \"f198157d37b71ee440f43ee323616cc9405c8957b26b92b8822330594db869a6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7d2e324bfd0e9aebd4cfb62c37bcf720913f9f467a3c90416911ecb5f4527523\"" May 17 00:22:17.266329 containerd[1701]: time="2025-05-17T00:22:17.266163704Z" level=info msg="StartContainer for \"7d2e324bfd0e9aebd4cfb62c37bcf720913f9f467a3c90416911ecb5f4527523\"" May 17 00:22:17.305379 systemd[1]: Started
cri-containerd-7d2e324bfd0e9aebd4cfb62c37bcf720913f9f467a3c90416911ecb5f4527523.scope - libcontainer container 7d2e324bfd0e9aebd4cfb62c37bcf720913f9f467a3c90416911ecb5f4527523. May 17 00:22:17.352441 containerd[1701]: time="2025-05-17T00:22:17.352296696Z" level=info msg="StartContainer for \"7d2e324bfd0e9aebd4cfb62c37bcf720913f9f467a3c90416911ecb5f4527523\" returns successfully" May 17 00:22:18.103534 kubelet[3166]: E0517 00:22:18.103493 3166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:18.103534 kubelet[3166]: W0517 00:22:18.103525 3166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:18.104106 kubelet[3166]: E0517 00:22:18.103552 3166 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:18.209993 systemd[1]: run-containerd-runc-k8s.io-7d2e324bfd0e9aebd4cfb62c37bcf720913f9f467a3c90416911ecb5f4527523-runc.R4LYe6.mount: Deactivated successfully.
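The kubelet messages above come from its FlexVolume prober: on each filesystem event under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it execs the driver binary (here nodeagent~uds/uds) with the single argument init and expects a JSON status on stdout. The binary is not installed yet at this point, so the exec fails, stdout is empty, and the empty string fails JSON unmarshalling, yielding the three-line error per probe. Below is a minimal sketch of that call convention in Python, assuming only the documented FlexVolume contract; the real uds driver is a compiled binary and the capability set shown is illustrative:

```python
#!/usr/bin/env python3
"""Sketch of the FlexVolume driver-call contract kubelet probes above.

kubelet runs <plugin-dir>/nodeagent~uds/uds with the argument "init"
and parses a JSON status from stdout; the empty output logged above
(binary not installed yet) is what breaks JSON unmarshalling.
"""
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "attach": False tells kubelet this driver has no attach/detach
        # phase, which is how socket-style drivers behave.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Operations this sketch does not implement are reported as unsupported.
    print(json.dumps({"status": "Not supported",
                      "message": "operation %r not implemented" % op}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```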
May 17 00:22:18.960357 kubelet[3166]: E0517 00:22:18.960296 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5tzj" podUID="062cbe61-e0c7-468e-8be4-9dc29bebfa6f" May 17 00:22:19.077110 kubelet[3166]: I0517 00:22:19.077056 3166 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:19.114624 kubelet[3166]: E0517 00:22:19.114212 3166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:19.114624 kubelet[3166]: W0517 00:22:19.114356 3166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:19.114624 kubelet[3166]: E0517 00:22:19.114428 3166 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:22:19.409315 containerd[1701]: time="2025-05-17T00:22:19.409260334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:19.412598 containerd[1701]: time="2025-05-17T00:22:19.412528087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:22:19.417635 containerd[1701]: time="2025-05-17T00:22:19.417573969Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:19.425784 containerd[1701]: time="2025-05-17T00:22:19.425715300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:19.426937 containerd[1701]: time="2025-05-17T00:22:19.426340410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.221360995s" May 17 00:22:19.426937 containerd[1701]: time="2025-05-17T00:22:19.426387011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:22:19.432585 containerd[1701]: time="2025-05-17T00:22:19.432548710Z" level=info msg="CreateContainer within sandbox \"e096faf7bb8a887d3779e340cd79408e0d3f8be5e13435afe4215c49694a1575\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:22:19.473550 containerd[1701]: time="2025-05-17T00:22:19.473497772Z" level=info msg="CreateContainer within sandbox \"e096faf7bb8a887d3779e340cd79408e0d3f8be5e13435afe4215c49694a1575\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871\"" May 17 00:22:19.474427 containerd[1701]: time="2025-05-17T00:22:19.474387887Z" level=info msg="StartContainer for \"c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871\"" May 17 00:22:19.509831 systemd[1]: run-containerd-runc-k8s.io-c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871-runc.CoSFVm.mount: Deactivated successfully. May 17 00:22:19.516404 systemd[1]: Started cri-containerd-c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871.scope - libcontainer container c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871. May 17 00:22:19.546045 containerd[1701]: time="2025-05-17T00:22:19.546003544Z" level=info msg="StartContainer for \"c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871\" returns successfully" May 17 00:22:19.556022 systemd[1]: cri-containerd-c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871.scope: Deactivated successfully. May 17 00:22:19.581688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871-rootfs.mount: Deactivated successfully. 
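The flexvol-driver container above runs to completion and its scope is deactivated within a second; no further FlexVolume probe errors appear in this log after it finishes. The pod_startup_latency_tracker entry that follows for calico-typha is internally consistent and can be reproduced from its own timestamps: podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window. A quick check, with the field relationships inferred from the logged values rather than from kubelet source:

```python
from datetime import datetime, timezone

# Timestamps copied from the pod_startup_latency_tracker entry below
# (rounded to microseconds, the resolution datetime supports).
utc = timezone.utc
created        = datetime(2025, 5, 17, 0, 22, 14, 0,      tzinfo=utc)  # podCreationTimestamp
pull_started   = datetime(2025, 5, 17, 0, 22, 14, 775530, tzinfo=utc)  # firstStartedPulling
pull_finished  = datetime(2025, 5, 17, 0, 22, 17, 202979, tzinfo=utc)  # lastFinishedPulling
watch_observed = datetime(2025, 5, 17, 0, 22, 20, 106896, tzinfo=utc)  # watchObservedRunningTime

e2e = watch_observed - created               # ~6.106896s -> podStartE2EDuration
slo = e2e - (pull_finished - pull_started)   # ~3.679447s -> podStartSLOduration
print(e2e.total_seconds(), slo.total_seconds())
```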
May 17 00:22:20.107434 kubelet[3166]: I0517 00:22:20.106916 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-798fd6667-mmj7l" podStartSLOduration=3.679446619 podStartE2EDuration="6.106895507s" podCreationTimestamp="2025-05-17 00:22:14 +0000 UTC" firstStartedPulling="2025-05-17 00:22:14.775529995 +0000 UTC m=+18.936056032" lastFinishedPulling="2025-05-17 00:22:17.202978883 +0000 UTC m=+21.363504920" observedRunningTime="2025-05-17 00:22:18.086990468 +0000 UTC m=+22.247516505" watchObservedRunningTime="2025-05-17 00:22:20.106895507 +0000 UTC m=+24.267421644" May 17 00:22:20.394776 kubelet[3166]: I0517 00:22:20.394281 3166 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:20.961231 kubelet[3166]: E0517 00:22:20.961168 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5tzj" podUID="062cbe61-e0c7-468e-8be4-9dc29bebfa6f" May 17 00:22:21.086359 containerd[1701]: time="2025-05-17T00:22:21.086256550Z" level=info msg="shim disconnected" id=c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871 namespace=k8s.io May 17 00:22:21.086359 containerd[1701]: time="2025-05-17T00:22:21.086341251Z" level=warning msg="cleaning up after shim disconnected" id=c78e6d4f966ff2dd11aab66251a53f008b89628cbc482f3d10fa019fb73f6871 namespace=k8s.io May 17 00:22:21.086359 containerd[1701]: time="2025-05-17T00:22:21.086356651Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:22.088986 containerd[1701]: time="2025-05-17T00:22:22.088625180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:22:22.960968 kubelet[3166]: E0517 00:22:22.960890 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5tzj" podUID="062cbe61-e0c7-468e-8be4-9dc29bebfa6f" May 17 00:22:24.961016 kubelet[3166]: E0517 00:22:24.960730 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5tzj" podUID="062cbe61-e0c7-468e-8be4-9dc29bebfa6f" May 17 00:22:25.319783 containerd[1701]: time="2025-05-17T00:22:25.319733270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:25.321627 containerd[1701]: time="2025-05-17T00:22:25.321561486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:22:25.324805 containerd[1701]: time="2025-05-17T00:22:25.324744812Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:25.330366 containerd[1701]: time="2025-05-17T00:22:25.330296258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 
00:22:25.331132 containerd[1701]: time="2025-05-17T00:22:25.330978164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.242257883s" May 17 00:22:25.331132 containerd[1701]: time="2025-05-17T00:22:25.331019364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:22:25.337847 containerd[1701]: time="2025-05-17T00:22:25.337807221Z" level=info msg="CreateContainer within sandbox \"e096faf7bb8a887d3779e340cd79408e0d3f8be5e13435afe4215c49694a1575\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:22:25.378001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022636549.mount: Deactivated successfully. May 17 00:22:25.385903 containerd[1701]: time="2025-05-17T00:22:25.385856020Z" level=info msg="CreateContainer within sandbox \"e096faf7bb8a887d3779e340cd79408e0d3f8be5e13435afe4215c49694a1575\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8db77b02105b91811f00b66d53ef11ff0345288f07eb7e2127461f5440312924\"" May 17 00:22:25.388090 containerd[1701]: time="2025-05-17T00:22:25.386477326Z" level=info msg="StartContainer for \"8db77b02105b91811f00b66d53ef11ff0345288f07eb7e2127461f5440312924\"" May 17 00:22:25.421391 systemd[1]: Started cri-containerd-8db77b02105b91811f00b66d53ef11ff0345288f07eb7e2127461f5440312924.scope - libcontainer container 8db77b02105b91811f00b66d53ef11ff0345288f07eb7e2127461f5440312924. May 17 00:22:25.454041 containerd[1701]: time="2025-05-17T00:22:25.453987987Z" level=info msg="StartContainer for \"8db77b02105b91811f00b66d53ef11ff0345288f07eb7e2127461f5440312924\" returns successfully" May 17 00:22:26.961204 kubelet[3166]: E0517 00:22:26.960823 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q5tzj" podUID="062cbe61-e0c7-468e-8be4-9dc29bebfa6f" May 17 00:22:27.109664 containerd[1701]: time="2025-05-17T00:22:27.109587760Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:22:27.112441 systemd[1]: cri-containerd-8db77b02105b91811f00b66d53ef11ff0345288f07eb7e2127461f5440312924.scope: Deactivated successfully. May 17 00:22:27.137627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8db77b02105b91811f00b66d53ef11ff0345288f07eb7e2127461f5440312924-rootfs.mount: Deactivated successfully. May 17 00:22:27.181602 kubelet[3166]: I0517 00:22:27.181568 3166 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:22:28.344155 systemd[1]: Created slice kubepods-burstable-pod8e6bfc34_e1e7_4777_bb96_96b800f3b1da.slice - libcontainer container kubepods-burstable-pod8e6bfc34_e1e7_4777_bb96_96b800f3b1da.slice. 
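The failed-to-reload error above shows containerd's CRI plugin reacting to a write under /etc/cni/net.d (the calico-kubeconfig file) before any network config exists there; until a valid conflist appears, the runtime keeps reporting NetworkReady=false and the csi-node-driver pod keeps failing to sync. For orientation, here is a minimal conflist of the general shape Calico's install-cni writes; the file name 10-calico.conflist and the field values are typical defaults, not read from this host:

```python
import json
import pathlib

# Illustrative minimal Calico conflist; once a valid conflist exists in
# /etc/cni/net.d, the CRI plugin can flip NetworkReady to true.
conflist = {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
        {
            "type": "calico",
            "datastore_type": "kubernetes",
            "ipam": {"type": "calico-ipam"},
            "policy": {"type": "k8s"},
            # The kubeconfig whose write triggered the reload event above.
            "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"},
        },
        {"type": "portmap", "snat": True, "capabilities": {"portMappings": True}},
    ],
}
pathlib.Path("/etc/cni/net.d/10-calico.conflist").write_text(
    json.dumps(conflist, indent=2))
```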
May 17 00:22:28.354220 containerd[1701]: time="2025-05-17T00:22:28.353566308Z" level=info msg="shim disconnected" id=8db77b02105b91811f00b66d53ef11ff0345288f07eb7e2127461f5440312924 namespace=k8s.io May 17 00:22:28.354220 containerd[1701]: time="2025-05-17T00:22:28.353937911Z" level=warning msg="cleaning up after shim disconnected" id=8db77b02105b91811f00b66d53ef11ff0345288f07eb7e2127461f5440312924 namespace=k8s.io May 17 00:22:28.354220 containerd[1701]: time="2025-05-17T00:22:28.353992311Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:28.368684 systemd[1]: Created slice kubepods-besteffort-pod29b7b6b7_83d4_4805_ba4b_48795aa317b5.slice - libcontainer container kubepods-besteffort-pod29b7b6b7_83d4_4805_ba4b_48795aa317b5.slice. May 17 00:22:28.389171 systemd[1]: Created slice kubepods-besteffort-pod6b509eee_7c18_42a2_a9a2_d0bc267486dd.slice - libcontainer container kubepods-besteffort-pod6b509eee_7c18_42a2_a9a2_d0bc267486dd.slice. May 17 00:22:28.399625 kubelet[3166]: I0517 00:22:28.399386 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7fb9aa5a-970a-4e82-b193-535f4a3ef021-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-z6qms\" (UID: \"7fb9aa5a-970a-4e82-b193-535f4a3ef021\") " pod="calico-system/goldmane-78d55f7ddc-z6qms" May 17 00:22:28.399625 kubelet[3166]: I0517 00:22:28.399436 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29b7b6b7-83d4-4805-ba4b-48795aa317b5-whisker-backend-key-pair\") pod \"whisker-cfb5fd58c-rrrg6\" (UID: \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\") " pod="calico-system/whisker-cfb5fd58c-rrrg6" May 17 00:22:28.399625 kubelet[3166]: I0517 00:22:28.399459 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb9aa5a-970a-4e82-b193-535f4a3ef021-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-z6qms\" (UID: \"7fb9aa5a-970a-4e82-b193-535f4a3ef021\") " pod="calico-system/goldmane-78d55f7ddc-z6qms" May 17 00:22:28.399625 kubelet[3166]: I0517 00:22:28.399487 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29b7b6b7-83d4-4805-ba4b-48795aa317b5-whisker-ca-bundle\") pod \"whisker-cfb5fd58c-rrrg6\" (UID: \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\") " pod="calico-system/whisker-cfb5fd58c-rrrg6" May 17 00:22:28.399625 kubelet[3166]: I0517 00:22:28.399513 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn4dx\" (UniqueName: \"kubernetes.io/projected/29b7b6b7-83d4-4805-ba4b-48795aa317b5-kube-api-access-bn4dx\") pod \"whisker-cfb5fd58c-rrrg6\" (UID: \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\") " pod="calico-system/whisker-cfb5fd58c-rrrg6" May 17 00:22:28.400254 kubelet[3166]: I0517 00:22:28.399532 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e6bfc34-e1e7-4777-bb96-96b800f3b1da-config-volume\") pod \"coredns-674b8bbfcf-9pgd9\" (UID: \"8e6bfc34-e1e7-4777-bb96-96b800f3b1da\") " pod="kube-system/coredns-674b8bbfcf-9pgd9" May 17 00:22:28.400254 kubelet[3166]: I0517 00:22:28.399558 3166 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wntm\" (UniqueName: \"kubernetes.io/projected/8e6bfc34-e1e7-4777-bb96-96b800f3b1da-kube-api-access-6wntm\") pod \"coredns-674b8bbfcf-9pgd9\" (UID: \"8e6bfc34-e1e7-4777-bb96-96b800f3b1da\") " pod="kube-system/coredns-674b8bbfcf-9pgd9" May 17 00:22:28.400254 kubelet[3166]: I0517 00:22:28.399579 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l2l6\" (UniqueName: \"kubernetes.io/projected/7fb9aa5a-970a-4e82-b193-535f4a3ef021-kube-api-access-4l2l6\") pod \"goldmane-78d55f7ddc-z6qms\" (UID: \"7fb9aa5a-970a-4e82-b193-535f4a3ef021\") " pod="calico-system/goldmane-78d55f7ddc-z6qms" May 17 00:22:28.400254 kubelet[3166]: I0517 00:22:28.399603 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6b509eee-7c18-42a2-a9a2-d0bc267486dd-calico-apiserver-certs\") pod \"calico-apiserver-c46c46bf5-9vfk6\" (UID: \"6b509eee-7c18-42a2-a9a2-d0bc267486dd\") " pod="calico-apiserver/calico-apiserver-c46c46bf5-9vfk6" May 17 00:22:28.400254 kubelet[3166]: I0517 00:22:28.399627 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlz9s\" (UniqueName: \"kubernetes.io/projected/6b509eee-7c18-42a2-a9a2-d0bc267486dd-kube-api-access-mlz9s\") pod \"calico-apiserver-c46c46bf5-9vfk6\" (UID: \"6b509eee-7c18-42a2-a9a2-d0bc267486dd\") " pod="calico-apiserver/calico-apiserver-c46c46bf5-9vfk6" May 17 00:22:28.400479 kubelet[3166]: I0517 00:22:28.399649 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fb9aa5a-970a-4e82-b193-535f4a3ef021-config\") pod \"goldmane-78d55f7ddc-z6qms\" (UID: \"7fb9aa5a-970a-4e82-b193-535f4a3ef021\") " pod="calico-system/goldmane-78d55f7ddc-z6qms" May 17 00:22:28.404857 systemd[1]: Created slice kubepods-besteffort-pod7fb9aa5a_970a_4e82_b193_535f4a3ef021.slice - libcontainer container kubepods-besteffort-pod7fb9aa5a_970a_4e82_b193_535f4a3ef021.slice. May 17 00:22:28.413210 systemd[1]: Created slice kubepods-burstable-poda9086985_91bc_4199_9a6a_aa40312930e7.slice - libcontainer container kubepods-burstable-poda9086985_91bc_4199_9a6a_aa40312930e7.slice. May 17 00:22:28.423409 systemd[1]: Created slice kubepods-besteffort-pod13021eae_c635_4ef1_8524_560d7fa13a2f.slice - libcontainer container kubepods-besteffort-pod13021eae_c635_4ef1_8524_560d7fa13a2f.slice. May 17 00:22:28.431639 systemd[1]: Created slice kubepods-besteffort-podd30fdcd3_ada9_4b49_b2d5_3ba22cf562c8.slice - libcontainer container kubepods-besteffort-podd30fdcd3_ada9_4b49_b2d5_3ba22cf562c8.slice. May 17 00:22:28.436845 systemd[1]: Created slice kubepods-besteffort-pod062cbe61_e0c7_468e_8be4_9dc29bebfa6f.slice - libcontainer container kubepods-besteffort-pod062cbe61_e0c7_468e_8be4_9dc29bebfa6f.slice. 
May 17 00:22:28.439910 containerd[1701]: time="2025-05-17T00:22:28.439859126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q5tzj,Uid:062cbe61-e0c7-468e-8be4-9dc29bebfa6f,Namespace:calico-system,Attempt:0,}" May 17 00:22:28.505221 kubelet[3166]: I0517 00:22:28.503165 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8-calico-apiserver-certs\") pod \"calico-apiserver-c46c46bf5-sgprk\" (UID: \"d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8\") " pod="calico-apiserver/calico-apiserver-c46c46bf5-sgprk" May 17 00:22:28.505221 kubelet[3166]: I0517 00:22:28.504609 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl5rt\" (UniqueName: \"kubernetes.io/projected/d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8-kube-api-access-hl5rt\") pod \"calico-apiserver-c46c46bf5-sgprk\" (UID: \"d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8\") " pod="calico-apiserver/calico-apiserver-c46c46bf5-sgprk" May 17 00:22:28.505221 kubelet[3166]: I0517 00:22:28.504675 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13021eae-c635-4ef1-8524-560d7fa13a2f-tigera-ca-bundle\") pod \"calico-kube-controllers-5d55675d8-rdf7k\" (UID: \"13021eae-c635-4ef1-8524-560d7fa13a2f\") " pod="calico-system/calico-kube-controllers-5d55675d8-rdf7k" May 17 00:22:28.505221 kubelet[3166]: I0517 00:22:28.504741 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p47cq\" (UniqueName: \"kubernetes.io/projected/13021eae-c635-4ef1-8524-560d7fa13a2f-kube-api-access-p47cq\") pod \"calico-kube-controllers-5d55675d8-rdf7k\" (UID: \"13021eae-c635-4ef1-8524-560d7fa13a2f\") " pod="calico-system/calico-kube-controllers-5d55675d8-rdf7k" May 17 00:22:28.505221 kubelet[3166]: I0517 00:22:28.504771 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf2tz\" (UniqueName: \"kubernetes.io/projected/a9086985-91bc-4199-9a6a-aa40312930e7-kube-api-access-qf2tz\") pod \"coredns-674b8bbfcf-6jwzq\" (UID: \"a9086985-91bc-4199-9a6a-aa40312930e7\") " pod="kube-system/coredns-674b8bbfcf-6jwzq" May 17 00:22:28.505566 kubelet[3166]: I0517 00:22:28.504809 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9086985-91bc-4199-9a6a-aa40312930e7-config-volume\") pod \"coredns-674b8bbfcf-6jwzq\" (UID: \"a9086985-91bc-4199-9a6a-aa40312930e7\") " pod="kube-system/coredns-674b8bbfcf-6jwzq" May 17 00:22:28.594590 containerd[1701]: time="2025-05-17T00:22:28.594442312Z" level=error msg="Failed to destroy network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.595547 containerd[1701]: time="2025-05-17T00:22:28.594871715Z" level=error msg="encountered an error cleaning up failed sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.595547 containerd[1701]: time="2025-05-17T00:22:28.594962416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q5tzj,Uid:062cbe61-e0c7-468e-8be4-9dc29bebfa6f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.595711 kubelet[3166]: E0517 00:22:28.595610 3166 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.595711 kubelet[3166]: E0517 00:22:28.595696 3166 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q5tzj" May 17 00:22:28.595808 kubelet[3166]: E0517 00:22:28.595730 3166 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q5tzj" May 17 00:22:28.595849 kubelet[3166]: E0517 00:22:28.595795 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q5tzj_calico-system(062cbe61-e0c7-468e-8be4-9dc29bebfa6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q5tzj_calico-system(062cbe61-e0c7-468e-8be4-9dc29bebfa6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q5tzj" podUID="062cbe61-e0c7-468e-8be4-9dc29bebfa6f" May 17 00:22:28.649327 containerd[1701]: time="2025-05-17T00:22:28.648823964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pgd9,Uid:8e6bfc34-e1e7-4777-bb96-96b800f3b1da,Namespace:kube-system,Attempt:0,}" May 17 00:22:28.680619 containerd[1701]: time="2025-05-17T00:22:28.680551828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cfb5fd58c-rrrg6,Uid:29b7b6b7-83d4-4805-ba4b-48795aa317b5,Namespace:calico-system,Attempt:0,}" May 17 00:22:28.698026 containerd[1701]: time="2025-05-17T00:22:28.697723971Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-c46c46bf5-9vfk6,Uid:6b509eee-7c18-42a2-a9a2-d0bc267486dd,Namespace:calico-apiserver,Attempt:0,}" May 17 00:22:28.711210 containerd[1701]: time="2025-05-17T00:22:28.710791480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-z6qms,Uid:7fb9aa5a-970a-4e82-b193-535f4a3ef021,Namespace:calico-system,Attempt:0,}" May 17 00:22:28.717549 containerd[1701]: time="2025-05-17T00:22:28.717501735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6jwzq,Uid:a9086985-91bc-4199-9a6a-aa40312930e7,Namespace:kube-system,Attempt:0,}" May 17 00:22:28.741632 containerd[1701]: time="2025-05-17T00:22:28.741018331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c46c46bf5-sgprk,Uid:d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8,Namespace:calico-apiserver,Attempt:0,}" May 17 00:22:28.741632 containerd[1701]: time="2025-05-17T00:22:28.741378134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55675d8-rdf7k,Uid:13021eae-c635-4ef1-8524-560d7fa13a2f,Namespace:calico-system,Attempt:0,}" May 17 00:22:28.765581 containerd[1701]: time="2025-05-17T00:22:28.765521035Z" level=error msg="Failed to destroy network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.765907 containerd[1701]: time="2025-05-17T00:22:28.765873938Z" level=error msg="encountered an error cleaning up failed sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.766006 containerd[1701]: time="2025-05-17T00:22:28.765942638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pgd9,Uid:8e6bfc34-e1e7-4777-bb96-96b800f3b1da,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.766265 kubelet[3166]: E0517 00:22:28.766220 3166 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.766405 kubelet[3166]: E0517 00:22:28.766295 3166 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9pgd9" May 17 00:22:28.766405 kubelet[3166]: E0517 00:22:28.766329 3166 kuberuntime_manager.go:1252] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9pgd9" May 17 00:22:28.767101 kubelet[3166]: E0517 00:22:28.766400 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9pgd9_kube-system(8e6bfc34-e1e7-4777-bb96-96b800f3b1da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9pgd9_kube-system(8e6bfc34-e1e7-4777-bb96-96b800f3b1da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9pgd9" podUID="8e6bfc34-e1e7-4777-bb96-96b800f3b1da" May 17 00:22:28.857286 containerd[1701]: time="2025-05-17T00:22:28.857058096Z" level=error msg="Failed to destroy network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.858214 containerd[1701]: time="2025-05-17T00:22:28.857486600Z" level=error msg="encountered an error cleaning up failed sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.858214 containerd[1701]: time="2025-05-17T00:22:28.857553500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cfb5fd58c-rrrg6,Uid:29b7b6b7-83d4-4805-ba4b-48795aa317b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.859217 kubelet[3166]: E0517 00:22:28.858339 3166 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.859217 kubelet[3166]: E0517 00:22:28.858410 3166 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cfb5fd58c-rrrg6" May 17 00:22:28.859217 kubelet[3166]: E0517 00:22:28.858437 3166 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cfb5fd58c-rrrg6" May 17 00:22:28.859828 kubelet[3166]: E0517 00:22:28.858546 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cfb5fd58c-rrrg6_calico-system(29b7b6b7-83d4-4805-ba4b-48795aa317b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cfb5fd58c-rrrg6_calico-system(29b7b6b7-83d4-4805-ba4b-48795aa317b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cfb5fd58c-rrrg6" podUID="29b7b6b7-83d4-4805-ba4b-48795aa317b5" May 17 00:22:28.894294 containerd[1701]: time="2025-05-17T00:22:28.894235206Z" level=error msg="Failed to destroy network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.896217 containerd[1701]: time="2025-05-17T00:22:28.895282114Z" level=error msg="encountered an error cleaning up failed sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.896217 containerd[1701]: time="2025-05-17T00:22:28.895366315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6jwzq,Uid:a9086985-91bc-4199-9a6a-aa40312930e7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.896397 kubelet[3166]: E0517 00:22:28.895872 3166 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:28.896397 kubelet[3166]: E0517 00:22:28.895951 3166 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6jwzq" May 17 
00:22:28.896397 kubelet[3166]: E0517 00:22:28.895980 3166 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6jwzq" May 17 00:22:28.897702 kubelet[3166]: E0517 00:22:28.897593 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6jwzq_kube-system(a9086985-91bc-4199-9a6a-aa40312930e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6jwzq_kube-system(a9086985-91bc-4199-9a6a-aa40312930e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6jwzq" podUID="a9086985-91bc-4199-9a6a-aa40312930e7" May 17 00:22:29.044315 containerd[1701]: time="2025-05-17T00:22:29.044255254Z" level=error msg="Failed to destroy network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.045218 containerd[1701]: time="2025-05-17T00:22:29.045156561Z" level=error msg="encountered an error cleaning up failed sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.047316 containerd[1701]: time="2025-05-17T00:22:29.047267879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c46c46bf5-9vfk6,Uid:6b509eee-7c18-42a2-a9a2-d0bc267486dd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.047593 kubelet[3166]: E0517 00:22:29.047547 3166 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.047733 kubelet[3166]: E0517 00:22:29.047627 3166 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-c46c46bf5-9vfk6" May 17 00:22:29.047733 kubelet[3166]: E0517 00:22:29.047662 3166 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c46c46bf5-9vfk6" May 17 00:22:29.047830 kubelet[3166]: E0517 00:22:29.047724 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c46c46bf5-9vfk6_calico-apiserver(6b509eee-7c18-42a2-a9a2-d0bc267486dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c46c46bf5-9vfk6_calico-apiserver(6b509eee-7c18-42a2-a9a2-d0bc267486dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c46c46bf5-9vfk6" podUID="6b509eee-7c18-42a2-a9a2-d0bc267486dd" May 17 00:22:29.054268 containerd[1701]: time="2025-05-17T00:22:29.054183736Z" level=error msg="Failed to destroy network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.055253 containerd[1701]: time="2025-05-17T00:22:29.054607740Z" level=error msg="encountered an error cleaning up failed sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.055253 containerd[1701]: time="2025-05-17T00:22:29.054691040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c46c46bf5-sgprk,Uid:d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.055441 kubelet[3166]: E0517 00:22:29.054947 3166 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.055441 kubelet[3166]: E0517 00:22:29.055008 3166 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c46c46bf5-sgprk" May 17 00:22:29.055441 kubelet[3166]: E0517 00:22:29.055041 3166 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c46c46bf5-sgprk" May 17 00:22:29.055585 kubelet[3166]: E0517 00:22:29.055097 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c46c46bf5-sgprk_calico-apiserver(d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c46c46bf5-sgprk_calico-apiserver(d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c46c46bf5-sgprk" podUID="d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8" May 17 00:22:29.060529 containerd[1701]: time="2025-05-17T00:22:29.060480189Z" level=error msg="Failed to destroy network for sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.061465 containerd[1701]: time="2025-05-17T00:22:29.061327296Z" level=error msg="encountered an error cleaning up failed sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.061465 containerd[1701]: time="2025-05-17T00:22:29.061404896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-z6qms,Uid:7fb9aa5a-970a-4e82-b193-535f4a3ef021,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.062055 kubelet[3166]: E0517 00:22:29.061805 3166 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.062257 kubelet[3166]: E0517 00:22:29.062158 3166 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-z6qms" May 17 00:22:29.062257 kubelet[3166]: E0517 00:22:29.062205 3166 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-z6qms" May 17 00:22:29.062473 kubelet[3166]: E0517 00:22:29.062312 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-z6qms_calico-system(7fb9aa5a-970a-4e82-b193-535f4a3ef021)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-z6qms_calico-system(7fb9aa5a-970a-4e82-b193-535f4a3ef021)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:22:29.062746 containerd[1701]: time="2025-05-17T00:22:29.062714007Z" level=error msg="Failed to destroy network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.063383 containerd[1701]: time="2025-05-17T00:22:29.063326112Z" level=error msg="encountered an error cleaning up failed sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.063588 containerd[1701]: time="2025-05-17T00:22:29.063544514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55675d8-rdf7k,Uid:13021eae-c635-4ef1-8524-560d7fa13a2f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.063976 kubelet[3166]: E0517 00:22:29.063941 3166 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.064081 kubelet[3166]: E0517 00:22:29.063997 3166 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55675d8-rdf7k" May 17 00:22:29.064081 kubelet[3166]: E0517 00:22:29.064023 3166 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55675d8-rdf7k" May 17 00:22:29.064239 kubelet[3166]: E0517 00:22:29.064081 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d55675d8-rdf7k_calico-system(13021eae-c635-4ef1-8524-560d7fa13a2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d55675d8-rdf7k_calico-system(13021eae-c635-4ef1-8524-560d7fa13a2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d55675d8-rdf7k" podUID="13021eae-c635-4ef1-8524-560d7fa13a2f" May 17 00:22:29.108856 kubelet[3166]: I0517 00:22:29.108082 3166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:29.113007 kubelet[3166]: I0517 00:22:29.112975 3166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:29.114342 containerd[1701]: time="2025-05-17T00:22:29.113234627Z" level=info msg="StopPodSandbox for \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\"" May 17 00:22:29.114827 containerd[1701]: time="2025-05-17T00:22:29.114648339Z" level=info msg="Ensure that sandbox a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05 in task-service has been cleanup successfully" May 17 00:22:29.115412 containerd[1701]: time="2025-05-17T00:22:29.114808440Z" level=info msg="StopPodSandbox for \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\"" May 17 00:22:29.115938 containerd[1701]: time="2025-05-17T00:22:29.115871549Z" level=info msg="Ensure that sandbox bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d in task-service has been cleanup successfully" May 17 00:22:29.121517 kubelet[3166]: I0517 00:22:29.121483 3166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:29.122550 containerd[1701]: time="2025-05-17T00:22:29.122413204Z" level=info msg="StopPodSandbox for \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\"" May 17 00:22:29.126881 containerd[1701]: time="2025-05-17T00:22:29.126346636Z" level=info msg="Ensure that sandbox 
9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0 in task-service has been cleanup successfully" May 17 00:22:29.132838 kubelet[3166]: I0517 00:22:29.132106 3166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:29.134438 containerd[1701]: time="2025-05-17T00:22:29.134387803Z" level=info msg="StopPodSandbox for \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\"" May 17 00:22:29.135682 containerd[1701]: time="2025-05-17T00:22:29.135042909Z" level=info msg="Ensure that sandbox acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6 in task-service has been cleanup successfully" May 17 00:22:29.145020 containerd[1701]: time="2025-05-17T00:22:29.144788690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:22:29.147969 kubelet[3166]: I0517 00:22:29.147930 3166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:29.149061 containerd[1701]: time="2025-05-17T00:22:29.148574021Z" level=info msg="StopPodSandbox for \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\"" May 17 00:22:29.149061 containerd[1701]: time="2025-05-17T00:22:29.148833324Z" level=info msg="Ensure that sandbox 9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820 in task-service has been cleanup successfully" May 17 00:22:29.157908 kubelet[3166]: I0517 00:22:29.157786 3166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:29.159977 containerd[1701]: time="2025-05-17T00:22:29.159939516Z" level=info msg="StopPodSandbox for \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\"" May 17 00:22:29.161094 containerd[1701]: time="2025-05-17T00:22:29.161061325Z" level=info msg="Ensure that sandbox cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa in task-service has been cleanup successfully" May 17 00:22:29.164941 kubelet[3166]: I0517 00:22:29.164540 3166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:29.168762 containerd[1701]: time="2025-05-17T00:22:29.168589288Z" level=info msg="StopPodSandbox for \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\"" May 17 00:22:29.176629 containerd[1701]: time="2025-05-17T00:22:29.176578854Z" level=info msg="Ensure that sandbox e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39 in task-service has been cleanup successfully" May 17 00:22:29.189531 kubelet[3166]: I0517 00:22:29.189497 3166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:29.223778 containerd[1701]: time="2025-05-17T00:22:29.223719646Z" level=info msg="StopPodSandbox for \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\"" May 17 00:22:29.229894 containerd[1701]: time="2025-05-17T00:22:29.229738497Z" level=info msg="Ensure that sandbox 447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b in task-service has been cleanup successfully" May 17 00:22:29.263092 containerd[1701]: time="2025-05-17T00:22:29.262524369Z" level=error msg="StopPodSandbox for 
\"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\" failed" error="failed to destroy network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.263888 kubelet[3166]: E0517 00:22:29.262809 3166 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:29.263888 kubelet[3166]: E0517 00:22:29.262887 3166 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05"} May 17 00:22:29.263888 kubelet[3166]: E0517 00:22:29.262962 3166 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e6bfc34-e1e7-4777-bb96-96b800f3b1da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:29.263888 kubelet[3166]: E0517 00:22:29.262998 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e6bfc34-e1e7-4777-bb96-96b800f3b1da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9pgd9" podUID="8e6bfc34-e1e7-4777-bb96-96b800f3b1da" May 17 00:22:29.288367 containerd[1701]: time="2025-05-17T00:22:29.288281684Z" level=error msg="StopPodSandbox for \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\" failed" error="failed to destroy network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.288632 kubelet[3166]: E0517 00:22:29.288595 3166 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:29.288812 kubelet[3166]: E0517 00:22:29.288753 3166 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b"} May 17 00:22:29.288812 kubelet[3166]: E0517 00:22:29.288808 3166 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:29.289047 kubelet[3166]: E0517 00:22:29.288839 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cfb5fd58c-rrrg6" podUID="29b7b6b7-83d4-4805-ba4b-48795aa317b5" May 17 00:22:29.308555 containerd[1701]: time="2025-05-17T00:22:29.308484352Z" level=error msg="StopPodSandbox for \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\" failed" error="failed to destroy network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.309231 kubelet[3166]: E0517 00:22:29.309017 3166 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:29.309231 kubelet[3166]: E0517 00:22:29.309086 3166 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0"} May 17 00:22:29.309231 kubelet[3166]: E0517 00:22:29.309128 3166 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b509eee-7c18-42a2-a9a2-d0bc267486dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:29.309231 kubelet[3166]: E0517 00:22:29.309163 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b509eee-7c18-42a2-a9a2-d0bc267486dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c46c46bf5-9vfk6" podUID="6b509eee-7c18-42a2-a9a2-d0bc267486dd" May 17 00:22:29.321696 containerd[1701]: time="2025-05-17T00:22:29.321506760Z" level=error msg="StopPodSandbox for \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\" failed" error="failed to destroy network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.322523 kubelet[3166]: E0517 00:22:29.321888 3166 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:29.322523 kubelet[3166]: E0517 00:22:29.321939 3166 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d"} May 17 00:22:29.322523 kubelet[3166]: E0517 00:22:29.321978 3166 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:29.322523 kubelet[3166]: E0517 00:22:29.322008 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c46c46bf5-sgprk" podUID="d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8" May 17 00:22:29.324147 containerd[1701]: time="2025-05-17T00:22:29.323923780Z" level=error msg="StopPodSandbox for \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\" failed" error="failed to destroy network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.325275 kubelet[3166]: E0517 00:22:29.324966 3166 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:29.325275 kubelet[3166]: E0517 00:22:29.325145 3166 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa"} May 17 00:22:29.325275 kubelet[3166]: E0517 00:22:29.325204 3166 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9086985-91bc-4199-9a6a-aa40312930e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:29.325275 kubelet[3166]: E0517 00:22:29.325236 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a9086985-91bc-4199-9a6a-aa40312930e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6jwzq" podUID="a9086985-91bc-4199-9a6a-aa40312930e7" May 17 00:22:29.338983 containerd[1701]: time="2025-05-17T00:22:29.338876804Z" level=error msg="StopPodSandbox for \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\" failed" error="failed to destroy network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.339850 kubelet[3166]: E0517 00:22:29.339627 3166 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:29.339850 kubelet[3166]: E0517 00:22:29.339795 3166 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6"} May 17 00:22:29.340363 kubelet[3166]: E0517 00:22:29.340040 3166 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"062cbe61-e0c7-468e-8be4-9dc29bebfa6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:29.340363 kubelet[3166]: E0517 00:22:29.340096 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"062cbe61-e0c7-468e-8be4-9dc29bebfa6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q5tzj" podUID="062cbe61-e0c7-468e-8be4-9dc29bebfa6f" May 17 00:22:29.344586 containerd[1701]: time="2025-05-17T00:22:29.344529751Z" level=error msg="StopPodSandbox for \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\" failed" error="failed to destroy network for sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.345004 kubelet[3166]: E0517 00:22:29.344962 3166 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:29.345300 kubelet[3166]: E0517 00:22:29.345141 3166 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39"} May 17 00:22:29.345300 kubelet[3166]: E0517 00:22:29.345206 3166 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fb9aa5a-970a-4e82-b193-535f4a3ef021\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:29.345300 kubelet[3166]: E0517 00:22:29.345265 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fb9aa5a-970a-4e82-b193-535f4a3ef021\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:22:29.346011 containerd[1701]: time="2025-05-17T00:22:29.345973063Z" level=error msg="StopPodSandbox for \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\" failed" error="failed to destroy network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:29.346237 kubelet[3166]: E0517 00:22:29.346181 3166 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:29.346317 kubelet[3166]: E0517 00:22:29.346269 3166 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820"} May 17 00:22:29.346370 kubelet[3166]: E0517 00:22:29.346309 3166 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"13021eae-c635-4ef1-8524-560d7fa13a2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:29.346370 kubelet[3166]: E0517 00:22:29.346342 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"13021eae-c635-4ef1-8524-560d7fa13a2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d55675d8-rdf7k" podUID="13021eae-c635-4ef1-8524-560d7fa13a2f" May 17 00:22:29.494123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6-shm.mount: Deactivated successfully. May 17 00:22:35.264346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2668086856.mount: Deactivated successfully. 
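All eight sandbox failures above (csi-node-driver, both coredns pods, whisker, goldmane, both calico-apiservers, calico-kube-controllers) share the single root cause spelled out in the error text: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node agent writes only once it is running with /var/lib/calico/ mounted. Until then every CNI add and delete returns ENOENT and kubelet keeps retrying; the image pull that completes just below removes the blocker. A hedged sketch of that precondition (the poll loop, timeout, and function name are our invention; the plugin itself just stats the path once per CNI call):

```python
import os
import time

# Path quoted verbatim in every CNI error above.
NODENAME_FILE = "/var/lib/calico/nodename"

def wait_for_calico_node(timeout_s: float = 120.0, poll_s: float = 2.0) -> bool:
    """Illustrative poll for the file calico/node writes once it is up.

    The CNI plugin does no waiting of its own: it stats this path once per
    add/delete and surfaces the ENOENT error seen in the log until the
    node agent has started and mounted /var/lib/calico/.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(NODENAME_FILE):
            return True
        time.sleep(poll_s)
    return False
```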
May 17 00:22:35.310364 containerd[1701]: time="2025-05-17T00:22:35.310304333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:35.313993 containerd[1701]: time="2025-05-17T00:22:35.313916361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:22:35.318710 containerd[1701]: time="2025-05-17T00:22:35.318648699Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:35.323003 containerd[1701]: time="2025-05-17T00:22:35.322936632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:35.325812 containerd[1701]: time="2025-05-17T00:22:35.325000749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.180168058s" May 17 00:22:35.325812 containerd[1701]: time="2025-05-17T00:22:35.325059949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:22:35.350445 containerd[1701]: time="2025-05-17T00:22:35.350400248Z" level=info msg="CreateContainer within sandbox \"e096faf7bb8a887d3779e340cd79408e0d3f8be5e13435afe4215c49694a1575\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:22:35.395033 containerd[1701]: time="2025-05-17T00:22:35.394976898Z" level=info msg="CreateContainer within sandbox \"e096faf7bb8a887d3779e340cd79408e0d3f8be5e13435afe4215c49694a1575\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"892b3c31f56c7e378390d4d1ac4f8e168e47e217e921e158c0aca04cb215a840\"" May 17 00:22:35.397224 containerd[1701]: time="2025-05-17T00:22:35.395706304Z" level=info msg="StartContainer for \"892b3c31f56c7e378390d4d1ac4f8e168e47e217e921e158c0aca04cb215a840\"" May 17 00:22:35.424342 systemd[1]: Started cri-containerd-892b3c31f56c7e378390d4d1ac4f8e168e47e217e921e158c0aca04cb215a840.scope - libcontainer container 892b3c31f56c7e378390d4d1ac4f8e168e47e217e921e158c0aca04cb215a840. May 17 00:22:35.457035 containerd[1701]: time="2025-05-17T00:22:35.456793783Z" level=info msg="StartContainer for \"892b3c31f56c7e378390d4d1ac4f8e168e47e217e921e158c0aca04cb215a840\" returns successfully" May 17 00:22:35.843291 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:22:35.843475 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
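For scale: the calico/node pull above moved 156,396,372 bytes in 6.180168058 s, roughly 25 MB/s. The slightly smaller size in the Pulled message (156,396,234) presumably counts only the resolved blob content, while "bytes read" includes the extra manifest bytes fetched during resolution.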
May 17 00:22:35.972338 containerd[1701]: time="2025-05-17T00:22:35.972284631Z" level=info msg="StopPodSandbox for \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\"" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.069 [INFO][4389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.069 [INFO][4389] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" iface="eth0" netns="/var/run/netns/cni-8e873838-0847-7aa2-6d64-67649491410f" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.070 [INFO][4389] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" iface="eth0" netns="/var/run/netns/cni-8e873838-0847-7aa2-6d64-67649491410f" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.070 [INFO][4389] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" iface="eth0" netns="/var/run/netns/cni-8e873838-0847-7aa2-6d64-67649491410f" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.070 [INFO][4389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.070 [INFO][4389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.107 [INFO][4401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" HandleID="k8s-pod-network.447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.108 [INFO][4401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.108 [INFO][4401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.117 [WARNING][4401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" HandleID="k8s-pod-network.447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.117 [INFO][4401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" HandleID="k8s-pod-network.447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.120 [INFO][4401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:36.125933 containerd[1701]: 2025-05-17 00:22:36.123 [INFO][4389] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:36.125933 containerd[1701]: time="2025-05-17T00:22:36.125630035Z" level=info msg="TearDown network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\" successfully" May 17 00:22:36.125933 containerd[1701]: time="2025-05-17T00:22:36.125666436Z" level=info msg="StopPodSandbox for \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\" returns successfully" May 17 00:22:36.163234 kubelet[3166]: I0517 00:22:36.162481 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn4dx\" (UniqueName: \"kubernetes.io/projected/29b7b6b7-83d4-4805-ba4b-48795aa317b5-kube-api-access-bn4dx\") pod \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\" (UID: \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\") " May 17 00:22:36.163234 kubelet[3166]: I0517 00:22:36.162544 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29b7b6b7-83d4-4805-ba4b-48795aa317b5-whisker-backend-key-pair\") pod \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\" (UID: \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\") " May 17 00:22:36.163234 kubelet[3166]: I0517 00:22:36.162582 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29b7b6b7-83d4-4805-ba4b-48795aa317b5-whisker-ca-bundle\") pod \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\" (UID: \"29b7b6b7-83d4-4805-ba4b-48795aa317b5\") " May 17 00:22:36.163234 kubelet[3166]: I0517 00:22:36.163086 3166 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b7b6b7-83d4-4805-ba4b-48795aa317b5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "29b7b6b7-83d4-4805-ba4b-48795aa317b5" (UID: "29b7b6b7-83d4-4805-ba4b-48795aa317b5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:22:36.170500 kubelet[3166]: I0517 00:22:36.170434 3166 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b7b6b7-83d4-4805-ba4b-48795aa317b5-kube-api-access-bn4dx" (OuterVolumeSpecName: "kube-api-access-bn4dx") pod "29b7b6b7-83d4-4805-ba4b-48795aa317b5" (UID: "29b7b6b7-83d4-4805-ba4b-48795aa317b5"). InnerVolumeSpecName "kube-api-access-bn4dx". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:22:36.175649 kubelet[3166]: I0517 00:22:36.175507 3166 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b7b6b7-83d4-4805-ba4b-48795aa317b5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "29b7b6b7-83d4-4805-ba4b-48795aa317b5" (UID: "29b7b6b7-83d4-4805-ba4b-48795aa317b5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:22:36.213614 systemd[1]: Removed slice kubepods-besteffort-pod29b7b6b7_83d4_4805_ba4b_48795aa317b5.slice - libcontainer container kubepods-besteffort-pod29b7b6b7_83d4_4805_ba4b_48795aa317b5.slice. 
May 17 00:22:36.263711 kubelet[3166]: I0517 00:22:36.263650 3166 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bn4dx\" (UniqueName: \"kubernetes.io/projected/29b7b6b7-83d4-4805-ba4b-48795aa317b5-kube-api-access-bn4dx\") on node \"ci-4081.3.3-n-4e81e33f0f\" DevicePath \"\"" May 17 00:22:36.263711 kubelet[3166]: I0517 00:22:36.263700 3166 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29b7b6b7-83d4-4805-ba4b-48795aa317b5-whisker-backend-key-pair\") on node \"ci-4081.3.3-n-4e81e33f0f\" DevicePath \"\"" May 17 00:22:36.263711 kubelet[3166]: I0517 00:22:36.263716 3166 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29b7b6b7-83d4-4805-ba4b-48795aa317b5-whisker-ca-bundle\") on node \"ci-4081.3.3-n-4e81e33f0f\" DevicePath \"\"" May 17 00:22:36.266782 systemd[1]: run-netns-cni\x2d8e873838\x2d0847\x2d7aa2\x2d6d64\x2d67649491410f.mount: Deactivated successfully. May 17 00:22:36.268367 systemd[1]: var-lib-kubelet-pods-29b7b6b7\x2d83d4\x2d4805\x2dba4b\x2d48795aa317b5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbn4dx.mount: Deactivated successfully. May 17 00:22:36.268472 systemd[1]: var-lib-kubelet-pods-29b7b6b7\x2d83d4\x2d4805\x2dba4b\x2d48795aa317b5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:22:36.313223 kubelet[3166]: I0517 00:22:36.311738 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kd2n2" podStartSLOduration=2.128446208 podStartE2EDuration="22.311711397s" podCreationTimestamp="2025-05-17 00:22:14 +0000 UTC" firstStartedPulling="2025-05-17 00:22:15.143803176 +0000 UTC m=+19.304329313" lastFinishedPulling="2025-05-17 00:22:35.327068365 +0000 UTC m=+39.487594502" observedRunningTime="2025-05-17 00:22:36.289142419 +0000 UTC m=+40.449668456" watchObservedRunningTime="2025-05-17 00:22:36.311711397 +0000 UTC m=+40.472237534" May 17 00:22:36.331595 systemd[1]: Created slice kubepods-besteffort-pod5e5288d0_1e2c_4d58_be59_3922b5edffe0.slice - libcontainer container kubepods-besteffort-pod5e5288d0_1e2c_4d58_be59_3922b5edffe0.slice. 
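The startup-latency record above is internally consistent and shows how kubelet accounts for image pulls. End to end: observedRunningTime − podCreationTimestamp = 00:22:36.311711397 − 00:22:14 = 22.311711397 s, the reported podStartE2EDuration. The pull window, lastFinishedPulling − firstStartedPulling, is m=+39.487594502 − m=+19.304329313 = 20.183265189 s, and 22.311711397 − 20.183265189 = 2.128446208 s is exactly the reported podStartSLOduration: the SLO figure is startup time with image-pull time excluded.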
May 17 00:22:36.364637 kubelet[3166]: I0517 00:22:36.364589 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e5288d0-1e2c-4d58-be59-3922b5edffe0-whisker-ca-bundle\") pod \"whisker-6ffb5d99bc-xn6zf\" (UID: \"5e5288d0-1e2c-4d58-be59-3922b5edffe0\") " pod="calico-system/whisker-6ffb5d99bc-xn6zf" May 17 00:22:36.365072 kubelet[3166]: I0517 00:22:36.365003 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbk6k\" (UniqueName: \"kubernetes.io/projected/5e5288d0-1e2c-4d58-be59-3922b5edffe0-kube-api-access-jbk6k\") pod \"whisker-6ffb5d99bc-xn6zf\" (UID: \"5e5288d0-1e2c-4d58-be59-3922b5edffe0\") " pod="calico-system/whisker-6ffb5d99bc-xn6zf" May 17 00:22:36.365706 kubelet[3166]: I0517 00:22:36.365672 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5e5288d0-1e2c-4d58-be59-3922b5edffe0-whisker-backend-key-pair\") pod \"whisker-6ffb5d99bc-xn6zf\" (UID: \"5e5288d0-1e2c-4d58-be59-3922b5edffe0\") " pod="calico-system/whisker-6ffb5d99bc-xn6zf" May 17 00:22:36.639278 containerd[1701]: time="2025-05-17T00:22:36.639163468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ffb5d99bc-xn6zf,Uid:5e5288d0-1e2c-4d58-be59-3922b5edffe0,Namespace:calico-system,Attempt:0,}" May 17 00:22:36.794588 systemd-networkd[1543]: caliaf8725feab8: Link UP May 17 00:22:36.796936 systemd-networkd[1543]: caliaf8725feab8: Gained carrier May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.704 [INFO][4423] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.714 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0 whisker-6ffb5d99bc- calico-system 5e5288d0-1e2c-4d58-be59-3922b5edffe0 935 0 2025-05-17 00:22:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6ffb5d99bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.3-n-4e81e33f0f whisker-6ffb5d99bc-xn6zf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliaf8725feab8 [] [] }} ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Namespace="calico-system" Pod="whisker-6ffb5d99bc-xn6zf" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.714 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Namespace="calico-system" Pod="whisker-6ffb5d99bc-xn6zf" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.742 [INFO][4436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" HandleID="k8s-pod-network.fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.743 [INFO][4436] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" HandleID="k8s-pod-network.fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9990), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-4e81e33f0f", "pod":"whisker-6ffb5d99bc-xn6zf", "timestamp":"2025-05-17 00:22:36.742907583 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4e81e33f0f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.743 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.743 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.743 [INFO][4436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4e81e33f0f' May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.751 [INFO][4436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.763 [INFO][4436] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.767 [INFO][4436] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.769 [INFO][4436] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.771 [INFO][4436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.771 [INFO][4436] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.772 [INFO][4436] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5 May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.776 [INFO][4436] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.785 [INFO][4436] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.129/26] block=192.168.29.128/26 handle="k8s-pod-network.fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.786 [INFO][4436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.129/26] handle="k8s-pod-network.fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.786 
[INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:36.814781 containerd[1701]: 2025-05-17 00:22:36.786 [INFO][4436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.129/26] IPv6=[] ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" HandleID="k8s-pod-network.fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" May 17 00:22:36.815945 containerd[1701]: 2025-05-17 00:22:36.787 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Namespace="calico-system" Pod="whisker-6ffb5d99bc-xn6zf" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0", GenerateName:"whisker-6ffb5d99bc-", Namespace:"calico-system", SelfLink:"", UID:"5e5288d0-1e2c-4d58-be59-3922b5edffe0", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ffb5d99bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"", Pod:"whisker-6ffb5d99bc-xn6zf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaf8725feab8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:36.815945 containerd[1701]: 2025-05-17 00:22:36.787 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.129/32] ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Namespace="calico-system" Pod="whisker-6ffb5d99bc-xn6zf" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" May 17 00:22:36.815945 containerd[1701]: 2025-05-17 00:22:36.787 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf8725feab8 ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Namespace="calico-system" Pod="whisker-6ffb5d99bc-xn6zf" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" May 17 00:22:36.815945 containerd[1701]: 2025-05-17 00:22:36.794 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Namespace="calico-system" Pod="whisker-6ffb5d99bc-xn6zf" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" May 17 00:22:36.815945 containerd[1701]: 2025-05-17 00:22:36.796 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Namespace="calico-system" Pod="whisker-6ffb5d99bc-xn6zf" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0", GenerateName:"whisker-6ffb5d99bc-", Namespace:"calico-system", SelfLink:"", UID:"5e5288d0-1e2c-4d58-be59-3922b5edffe0", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ffb5d99bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5", Pod:"whisker-6ffb5d99bc-xn6zf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaf8725feab8", MAC:"c6:3f:3c:c5:cd:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:36.815945 containerd[1701]: 2025-05-17 00:22:36.812 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5" Namespace="calico-system" Pod="whisker-6ffb5d99bc-xn6zf" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--6ffb5d99bc--xn6zf-eth0" May 17 00:22:36.836470 containerd[1701]: time="2025-05-17T00:22:36.836294116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:36.836470 containerd[1701]: time="2025-05-17T00:22:36.836365317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:36.836470 containerd[1701]: time="2025-05-17T00:22:36.836410217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:36.836838 containerd[1701]: time="2025-05-17T00:22:36.836509618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:36.863414 systemd[1]: Started cri-containerd-fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5.scope - libcontainer container fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5. 
May 17 00:22:36.903860 containerd[1701]: time="2025-05-17T00:22:36.903726045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ffb5d99bc-xn6zf,Uid:5e5288d0-1e2c-4d58-be59-3922b5edffe0,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa228d77554a75969338af6005bff0edcc6f654888a2548fe327877c399200a5\"" May 17 00:22:36.906751 containerd[1701]: time="2025-05-17T00:22:36.906596768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:22:37.073580 containerd[1701]: time="2025-05-17T00:22:37.073433278Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:37.077127 containerd[1701]: time="2025-05-17T00:22:37.076975406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:37.077127 containerd[1701]: time="2025-05-17T00:22:37.077022406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:22:37.077397 kubelet[3166]: E0517 00:22:37.077349 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:37.077590 kubelet[3166]: E0517 00:22:37.077423 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:37.077727 kubelet[3166]: E0517 00:22:37.077648 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cc198a9d9a44458f955616eaa1e83f23,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbk6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ffb5d99bc-xn6zf_calico-system(5e5288d0-1e2c-4d58-be59-3922b5edffe0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:37.080247 containerd[1701]: time="2025-05-17T00:22:37.080215631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:22:37.244660 containerd[1701]: time="2025-05-17T00:22:37.244593522Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:37.247118 containerd[1701]: time="2025-05-17T00:22:37.247061942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:37.247289 containerd[1701]: time="2025-05-17T00:22:37.247181742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:22:37.247634 kubelet[3166]: E0517 00:22:37.247565 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:37.248159 kubelet[3166]: E0517 00:22:37.247647 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:37.248306 kubelet[3166]: E0517 00:22:37.247845 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbk6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ffb5d99bc-xn6zf_calico-system(5e5288d0-1e2c-4d58-be59-3922b5edffe0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:37.249523 kubelet[3166]: E0517 00:22:37.249383 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:22:37.794385 kernel: bpftool[4608]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:22:37.896075 kubelet[3166]: I0517 00:22:37.895478 3166 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:37.935550 systemd[1]: run-containerd-runc-k8s.io-892b3c31f56c7e378390d4d1ac4f8e168e47e217e921e158c0aca04cb215a840-runc.R7mdk4.mount: Deactivated successfully. May 17 00:22:37.966670 kubelet[3166]: I0517 00:22:37.966548 3166 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b7b6b7-83d4-4805-ba4b-48795aa317b5" path="/var/lib/kubelet/pods/29b7b6b7-83d4-4805-ba4b-48795aa317b5/volumes" May 17 00:22:38.128601 systemd-networkd[1543]: caliaf8725feab8: Gained IPv6LL May 17 00:22:38.194856 systemd-networkd[1543]: vxlan.calico: Link UP May 17 00:22:38.194868 systemd-networkd[1543]: vxlan.calico: Gained carrier May 17 00:22:38.219582 kubelet[3166]: E0517 00:22:38.217610 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:22:39.472539 systemd-networkd[1543]: vxlan.calico: Gained IPv6LL May 17 00:22:40.963295 containerd[1701]: time="2025-05-17T00:22:40.961595297Z" level=info msg="StopPodSandbox for \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\"" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.021 [INFO][4739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.023 [INFO][4739] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" iface="eth0" netns="/var/run/netns/cni-8d9a8790-3ff0-0566-7792-f0d1e7a3909e" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.023 [INFO][4739] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" iface="eth0" netns="/var/run/netns/cni-8d9a8790-3ff0-0566-7792-f0d1e7a3909e" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.024 [INFO][4739] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" iface="eth0" netns="/var/run/netns/cni-8d9a8790-3ff0-0566-7792-f0d1e7a3909e" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.024 [INFO][4739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.024 [INFO][4739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.046 [INFO][4746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" HandleID="k8s-pod-network.acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.046 [INFO][4746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.046 [INFO][4746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.053 [WARNING][4746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" HandleID="k8s-pod-network.acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.053 [INFO][4746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" HandleID="k8s-pod-network.acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.054 [INFO][4746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:41.057276 containerd[1701]: 2025-05-17 00:22:41.056 [INFO][4739] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:41.059444 containerd[1701]: time="2025-05-17T00:22:41.059317560Z" level=info msg="TearDown network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\" successfully" May 17 00:22:41.059444 containerd[1701]: time="2025-05-17T00:22:41.059360660Z" level=info msg="StopPodSandbox for \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\" returns successfully" May 17 00:22:41.061993 containerd[1701]: time="2025-05-17T00:22:41.061257261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q5tzj,Uid:062cbe61-e0c7-468e-8be4-9dc29bebfa6f,Namespace:calico-system,Attempt:1,}" May 17 00:22:41.063009 systemd[1]: run-netns-cni\x2d8d9a8790\x2d3ff0\x2d0566\x2d7792\x2df0d1e7a3909e.mount: Deactivated successfully. May 17 00:22:41.221315 systemd-networkd[1543]: calic27cfc188c2: Link UP May 17 00:22:41.221605 systemd-networkd[1543]: calic27cfc188c2: Gained carrier May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.145 [INFO][4752] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0 csi-node-driver- calico-system 062cbe61-e0c7-468e-8be4-9dc29bebfa6f 968 0 2025-05-17 00:22:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-4e81e33f0f csi-node-driver-q5tzj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic27cfc188c2 [] [] }} ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Namespace="calico-system" Pod="csi-node-driver-q5tzj" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.146 [INFO][4752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Namespace="calico-system" Pod="csi-node-driver-q5tzj" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.171 [INFO][4764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" HandleID="k8s-pod-network.ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.171 [INFO][4764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" HandleID="k8s-pod-network.ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233280), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-4e81e33f0f", "pod":"csi-node-driver-q5tzj", "timestamp":"2025-05-17 00:22:41.171087632 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4e81e33f0f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.171 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.171 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.171 [INFO][4764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4e81e33f0f' May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.179 [INFO][4764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.186 [INFO][4764] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.191 [INFO][4764] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.193 [INFO][4764] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.195 [INFO][4764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.195 [INFO][4764] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.197 [INFO][4764] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10 May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.203 [INFO][4764] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.215 [INFO][4764] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.130/26] block=192.168.29.128/26 handle="k8s-pod-network.ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.215 [INFO][4764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.130/26] handle="k8s-pod-network.ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.215 [INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:22:41.250591 containerd[1701]: 2025-05-17 00:22:41.215 [INFO][4764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.130/26] IPv6=[] ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" HandleID="k8s-pod-network.ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.253506 containerd[1701]: 2025-05-17 00:22:41.217 [INFO][4752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Namespace="calico-system" Pod="csi-node-driver-q5tzj" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"062cbe61-e0c7-468e-8be4-9dc29bebfa6f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"", Pod:"csi-node-driver-q5tzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic27cfc188c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.253506 containerd[1701]: 2025-05-17 00:22:41.217 [INFO][4752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.130/32] ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Namespace="calico-system" Pod="csi-node-driver-q5tzj" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.253506 containerd[1701]: 2025-05-17 00:22:41.217 [INFO][4752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic27cfc188c2 ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Namespace="calico-system" Pod="csi-node-driver-q5tzj" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.253506 containerd[1701]: 2025-05-17 00:22:41.222 [INFO][4752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Namespace="calico-system" Pod="csi-node-driver-q5tzj" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.253506 containerd[1701]: 2025-05-17 00:22:41.224 [INFO][4752] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Namespace="calico-system" Pod="csi-node-driver-q5tzj" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"062cbe61-e0c7-468e-8be4-9dc29bebfa6f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10", Pod:"csi-node-driver-q5tzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic27cfc188c2", MAC:"aa:8a:53:3e:72:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.253506 containerd[1701]: 2025-05-17 00:22:41.245 [INFO][4752] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10" Namespace="calico-system" Pod="csi-node-driver-q5tzj" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:41.279980 containerd[1701]: time="2025-05-17T00:22:41.279606502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:41.279980 containerd[1701]: time="2025-05-17T00:22:41.279705002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:41.279980 containerd[1701]: time="2025-05-17T00:22:41.279726302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:41.279980 containerd[1701]: time="2025-05-17T00:22:41.279840902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:41.310413 systemd[1]: Started cri-containerd-ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10.scope - libcontainer container ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10. 
May 17 00:22:41.337613 containerd[1701]: time="2025-05-17T00:22:41.337566939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q5tzj,Uid:062cbe61-e0c7-468e-8be4-9dc29bebfa6f,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10\"" May 17 00:22:41.340148 containerd[1701]: time="2025-05-17T00:22:41.340107741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:22:42.731776 containerd[1701]: time="2025-05-17T00:22:42.731705564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:42.733726 containerd[1701]: time="2025-05-17T00:22:42.733667181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:22:42.737991 containerd[1701]: time="2025-05-17T00:22:42.737912419Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:42.745309 containerd[1701]: time="2025-05-17T00:22:42.745232284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:42.746499 containerd[1701]: time="2025-05-17T00:22:42.746037391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.40588275s" May 17 00:22:42.746499 containerd[1701]: time="2025-05-17T00:22:42.746088491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:22:42.753423 containerd[1701]: time="2025-05-17T00:22:42.753379156Z" level=info msg="CreateContainer within sandbox \"ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:22:42.794885 containerd[1701]: time="2025-05-17T00:22:42.794829924Z" level=info msg="CreateContainer within sandbox \"ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6832156f780e219a0e078f1d67ce7739b20d29e882b464736eb17e402fc5c8c4\"" May 17 00:22:42.797219 containerd[1701]: time="2025-05-17T00:22:42.795795233Z" level=info msg="StartContainer for \"6832156f780e219a0e078f1d67ce7739b20d29e882b464736eb17e402fc5c8c4\"" May 17 00:22:42.836455 systemd[1]: Started cri-containerd-6832156f780e219a0e078f1d67ce7739b20d29e882b464736eb17e402fc5c8c4.scope - libcontainer container 6832156f780e219a0e078f1d67ce7739b20d29e882b464736eb17e402fc5c8c4. 
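The calico/csi pull above completes cleanly from the same registry that returned 403 Forbidden for the whisker and whisker-backend images at 00:22:37, which localizes that failure: ghcr.io uses standard registry token auth, containerd first fetches an anonymous bearer token scoped to the repository, and the 403 came from that token endpoint, meaning the registry refuses anonymous pulls for those two repositories in particular. A hedged reproduction of the failing request in Go (the URL is copied verbatim from the log; intended only as a diagnostic):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // The anonymous token request containerd logged as 403 Forbidden.
        url := "https://ghcr.io/token" +
            "?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"

        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status) // the log shows "403 Forbidden"
        fmt.Println(string(body))
    }

Because this is an authorization error rather than a transient fetch failure, the ImagePullBackOff at 00:22:38 will not clear on its own; the pulls need either publicly readable repositories or registry credentials supplied to the pod via imagePullSecrets.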
May 17 00:22:42.874711 containerd[1701]: time="2025-05-17T00:22:42.874657233Z" level=info msg="StartContainer for \"6832156f780e219a0e078f1d67ce7739b20d29e882b464736eb17e402fc5c8c4\" returns successfully" May 17 00:22:42.876754 containerd[1701]: time="2025-05-17T00:22:42.876396449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:22:42.962068 containerd[1701]: time="2025-05-17T00:22:42.961606006Z" level=info msg="StopPodSandbox for \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\"" May 17 00:22:42.992446 systemd-networkd[1543]: calic27cfc188c2: Gained IPv6LL May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.021 [INFO][4869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.021 [INFO][4869] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" iface="eth0" netns="/var/run/netns/cni-e1b092ec-8b3a-038b-5862-a26ae1613a0e" May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.021 [INFO][4869] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" iface="eth0" netns="/var/run/netns/cni-e1b092ec-8b3a-038b-5862-a26ae1613a0e" May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.021 [INFO][4869] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" iface="eth0" netns="/var/run/netns/cni-e1b092ec-8b3a-038b-5862-a26ae1613a0e" May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.022 [INFO][4869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.022 [INFO][4869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.044 [INFO][4876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" HandleID="k8s-pod-network.e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.044 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.044 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.050 [WARNING][4876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" HandleID="k8s-pod-network.e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.050 [INFO][4876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" HandleID="k8s-pod-network.e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.051 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:43.054396 containerd[1701]: 2025-05-17 00:22:43.053 [INFO][4869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:43.056529 containerd[1701]: time="2025-05-17T00:22:43.054599532Z" level=info msg="TearDown network for sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\" successfully" May 17 00:22:43.056529 containerd[1701]: time="2025-05-17T00:22:43.054639132Z" level=info msg="StopPodSandbox for \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\" returns successfully" May 17 00:22:43.056529 containerd[1701]: time="2025-05-17T00:22:43.055439739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-z6qms,Uid:7fb9aa5a-970a-4e82-b193-535f4a3ef021,Namespace:calico-system,Attempt:1,}" May 17 00:22:43.058526 systemd[1]: run-netns-cni\x2de1b092ec\x2d8b3a\x2d038b\x2d5862\x2da26ae1613a0e.mount: Deactivated successfully. May 17 00:22:43.258010 systemd-networkd[1543]: cali7e7ff9966d6: Link UP May 17 00:22:43.260008 systemd-networkd[1543]: cali7e7ff9966d6: Gained carrier May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.157 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0 goldmane-78d55f7ddc- calico-system 7fb9aa5a-970a-4e82-b193-535f4a3ef021 980 0 2025-05-17 00:22:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.3-n-4e81e33f0f goldmane-78d55f7ddc-z6qms eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7e7ff9966d6 [] [] }} ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Namespace="calico-system" Pod="goldmane-78d55f7ddc-z6qms" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.157 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Namespace="calico-system" Pod="goldmane-78d55f7ddc-z6qms" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.202 [INFO][4894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" HandleID="k8s-pod-network.40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" 
Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.202 [INFO][4894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" HandleID="k8s-pod-network.40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d98a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-4e81e33f0f", "pod":"goldmane-78d55f7ddc-z6qms", "timestamp":"2025-05-17 00:22:43.202230143 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4e81e33f0f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.203 [INFO][4894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.203 [INFO][4894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.203 [INFO][4894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4e81e33f0f' May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.213 [INFO][4894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.218 [INFO][4894] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.223 [INFO][4894] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.226 [INFO][4894] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.228 [INFO][4894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.228 [INFO][4894] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.229 [INFO][4894] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3 May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.238 [INFO][4894] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.250 [INFO][4894] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.131/26] block=192.168.29.128/26 handle="k8s-pod-network.40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.250 [INFO][4894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.131/26] 
handle="k8s-pod-network.40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.250 [INFO][4894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:43.280723 containerd[1701]: 2025-05-17 00:22:43.250 [INFO][4894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.131/26] IPv6=[] ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" HandleID="k8s-pod-network.40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:43.281713 containerd[1701]: 2025-05-17 00:22:43.253 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Namespace="calico-system" Pod="goldmane-78d55f7ddc-z6qms" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"7fb9aa5a-970a-4e82-b193-535f4a3ef021", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"", Pod:"goldmane-78d55f7ddc-z6qms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e7ff9966d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:43.281713 containerd[1701]: 2025-05-17 00:22:43.253 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.131/32] ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Namespace="calico-system" Pod="goldmane-78d55f7ddc-z6qms" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:43.281713 containerd[1701]: 2025-05-17 00:22:43.253 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e7ff9966d6 ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Namespace="calico-system" Pod="goldmane-78d55f7ddc-z6qms" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:43.281713 containerd[1701]: 2025-05-17 00:22:43.255 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Namespace="calico-system" Pod="goldmane-78d55f7ddc-z6qms" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 
00:22:43.281713 containerd[1701]: 2025-05-17 00:22:43.255 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Namespace="calico-system" Pod="goldmane-78d55f7ddc-z6qms" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"7fb9aa5a-970a-4e82-b193-535f4a3ef021", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3", Pod:"goldmane-78d55f7ddc-z6qms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e7ff9966d6", MAC:"66:1e:19:80:a1:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:43.281713 containerd[1701]: 2025-05-17 00:22:43.276 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3" Namespace="calico-system" Pod="goldmane-78d55f7ddc-z6qms" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:43.322982 containerd[1701]: time="2025-05-17T00:22:43.322018507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:43.322982 containerd[1701]: time="2025-05-17T00:22:43.322102708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:43.322982 containerd[1701]: time="2025-05-17T00:22:43.322117308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:43.323376 containerd[1701]: time="2025-05-17T00:22:43.323305118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:43.346419 systemd[1]: Started cri-containerd-40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3.scope - libcontainer container 40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3. 
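goldmane-78d55f7ddc-z6qms was assigned 192.168.29.131/26 from the node's affine block 192.168.29.128/26, just as csi-node-driver-q5tzj got .130 earlier. A standard-library check that the claimed addresses really fall inside that block:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and addresses from the IPAM entries logged above.
        block := netip.MustParsePrefix("192.168.29.128/26")
        for _, s := range []string{"192.168.29.130", "192.168.29.131"} {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
        }
        // A /26 holds 64 addresses, so this block spans .128 through .191.
    }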
May 17 00:22:43.417176 containerd[1701]: time="2025-05-17T00:22:43.417094551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-z6qms,Uid:7fb9aa5a-970a-4e82-b193-535f4a3ef021,Namespace:calico-system,Attempt:1,} returns sandbox id \"40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3\"" May 17 00:22:43.965423 containerd[1701]: time="2025-05-17T00:22:43.964297812Z" level=info msg="StopPodSandbox for \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\"" May 17 00:22:43.965423 containerd[1701]: time="2025-05-17T00:22:43.965035518Z" level=info msg="StopPodSandbox for \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\"" May 17 00:22:43.968048 containerd[1701]: time="2025-05-17T00:22:43.967927044Z" level=info msg="StopPodSandbox for \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\"" May 17 00:22:43.971703 containerd[1701]: time="2025-05-17T00:22:43.971558976Z" level=info msg="StopPodSandbox for \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\"" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.131 [INFO][4999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.131 [INFO][4999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" iface="eth0" netns="/var/run/netns/cni-b4c2119b-d895-d728-7148-0aa8d0f055ad" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.134 [INFO][4999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" iface="eth0" netns="/var/run/netns/cni-b4c2119b-d895-d728-7148-0aa8d0f055ad" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.135 [INFO][4999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" iface="eth0" netns="/var/run/netns/cni-b4c2119b-d895-d728-7148-0aa8d0f055ad" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.135 [INFO][4999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.135 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.214 [INFO][5021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" HandleID="k8s-pod-network.a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.214 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.215 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.225 [WARNING][5021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" HandleID="k8s-pod-network.a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.225 [INFO][5021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" HandleID="k8s-pod-network.a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.229 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:44.236549 containerd[1701]: 2025-05-17 00:22:44.232 [INFO][4999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:44.242009 containerd[1701]: time="2025-05-17T00:22:44.241950678Z" level=info msg="TearDown network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\" successfully" May 17 00:22:44.243065 containerd[1701]: time="2025-05-17T00:22:44.243017787Z" level=info msg="StopPodSandbox for \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\" returns successfully" May 17 00:22:44.244388 systemd[1]: run-netns-cni\x2db4c2119b\x2dd895\x2dd728\x2d7148\x2d0aa8d0f055ad.mount: Deactivated successfully. May 17 00:22:44.245827 containerd[1701]: time="2025-05-17T00:22:44.245788512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pgd9,Uid:8e6bfc34-e1e7-4777-bb96-96b800f3b1da,Namespace:kube-system,Attempt:1,}" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.151 [INFO][4991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.152 [INFO][4991] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" iface="eth0" netns="/var/run/netns/cni-2444d969-c266-7e58-21e4-7d4ecfa2392b" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.153 [INFO][4991] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" iface="eth0" netns="/var/run/netns/cni-2444d969-c266-7e58-21e4-7d4ecfa2392b" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.153 [INFO][4991] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" iface="eth0" netns="/var/run/netns/cni-2444d969-c266-7e58-21e4-7d4ecfa2392b" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.153 [INFO][4991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.153 [INFO][4991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.254 [INFO][5030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" HandleID="k8s-pod-network.cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.256 [INFO][5030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.256 [INFO][5030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.269 [WARNING][5030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" HandleID="k8s-pod-network.cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.269 [INFO][5030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" HandleID="k8s-pod-network.cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.272 [INFO][5030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:44.279788 containerd[1701]: 2025-05-17 00:22:44.277 [INFO][4991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:44.285227 containerd[1701]: time="2025-05-17T00:22:44.282767741Z" level=info msg="TearDown network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\" successfully" May 17 00:22:44.285227 containerd[1701]: time="2025-05-17T00:22:44.282814141Z" level=info msg="StopPodSandbox for \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\" returns successfully" May 17 00:22:44.291411 systemd[1]: run-netns-cni\x2d2444d969\x2dc266\x2d7e58\x2d21e4\x2d7d4ecfa2392b.mount: Deactivated successfully. May 17 00:22:44.293896 containerd[1701]: time="2025-05-17T00:22:44.293835139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6jwzq,Uid:a9086985-91bc-4199-9a6a-aa40312930e7,Namespace:kube-system,Attempt:1,}" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.117 [INFO][4992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.117 [INFO][4992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" iface="eth0" netns="/var/run/netns/cni-65deee49-17b1-691d-b72c-7dbcfb578f69" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.118 [INFO][4992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" iface="eth0" netns="/var/run/netns/cni-65deee49-17b1-691d-b72c-7dbcfb578f69" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.121 [INFO][4992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" iface="eth0" netns="/var/run/netns/cni-65deee49-17b1-691d-b72c-7dbcfb578f69" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.134 [INFO][4992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.134 [INFO][4992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.266 [INFO][5022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" HandleID="k8s-pod-network.bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.269 [INFO][5022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.272 [INFO][5022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.299 [WARNING][5022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" HandleID="k8s-pod-network.bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.299 [INFO][5022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" HandleID="k8s-pod-network.bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.307 [INFO][5022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:44.315737 containerd[1701]: 2025-05-17 00:22:44.314 [INFO][4992] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:44.316392 containerd[1701]: time="2025-05-17T00:22:44.315907635Z" level=info msg="TearDown network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\" successfully" May 17 00:22:44.316392 containerd[1701]: time="2025-05-17T00:22:44.315944535Z" level=info msg="StopPodSandbox for \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\" returns successfully" May 17 00:22:44.318650 containerd[1701]: time="2025-05-17T00:22:44.317220447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c46c46bf5-sgprk,Uid:d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8,Namespace:calico-apiserver,Attempt:1,}" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.159 [INFO][4984] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.160 [INFO][4984] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" iface="eth0" netns="/var/run/netns/cni-94ab5b71-2954-f3b7-2bd2-53c1a0d0a1a4" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.161 [INFO][4984] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" iface="eth0" netns="/var/run/netns/cni-94ab5b71-2954-f3b7-2bd2-53c1a0d0a1a4" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.162 [INFO][4984] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" iface="eth0" netns="/var/run/netns/cni-94ab5b71-2954-f3b7-2bd2-53c1a0d0a1a4" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.162 [INFO][4984] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.163 [INFO][4984] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.289 [INFO][5032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" HandleID="k8s-pod-network.9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.289 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.308 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.328 [WARNING][5032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" HandleID="k8s-pod-network.9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.330 [INFO][5032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" HandleID="k8s-pod-network.9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.333 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:44.342868 containerd[1701]: 2025-05-17 00:22:44.338 [INFO][4984] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:44.344234 containerd[1701]: time="2025-05-17T00:22:44.343898284Z" level=info msg="TearDown network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\" successfully" May 17 00:22:44.344234 containerd[1701]: time="2025-05-17T00:22:44.343972784Z" level=info msg="StopPodSandbox for \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\" returns successfully" May 17 00:22:44.347306 containerd[1701]: time="2025-05-17T00:22:44.345795900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55675d8-rdf7k,Uid:13021eae-c635-4ef1-8524-560d7fa13a2f,Namespace:calico-system,Attempt:1,}" May 17 00:22:44.472052 systemd-networkd[1543]: calic9fa6ceb9b0: Link UP May 17 00:22:44.473390 systemd-networkd[1543]: calic9fa6ceb9b0: Gained carrier May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.391 [INFO][5049] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0 coredns-674b8bbfcf- kube-system 8e6bfc34-e1e7-4777-bb96-96b800f3b1da 992 0 2025-05-17 00:22:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-4e81e33f0f coredns-674b8bbfcf-9pgd9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic9fa6ceb9b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pgd9" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.391 [INFO][5049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pgd9" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.425 [INFO][5062] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" HandleID="k8s-pod-network.9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.508738 
containerd[1701]: 2025-05-17 00:22:44.425 [INFO][5062] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" HandleID="k8s-pod-network.9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9030), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-4e81e33f0f", "pod":"coredns-674b8bbfcf-9pgd9", "timestamp":"2025-05-17 00:22:44.425012504 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4e81e33f0f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.425 [INFO][5062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.425 [INFO][5062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.425 [INFO][5062] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4e81e33f0f' May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.432 [INFO][5062] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.437 [INFO][5062] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.442 [INFO][5062] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.444 [INFO][5062] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.448 [INFO][5062] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.449 [INFO][5062] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.451 [INFO][5062] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7 May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.456 [INFO][5062] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.466 [INFO][5062] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.132/26] block=192.168.29.128/26 handle="k8s-pod-network.9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.466 [INFO][5062] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.132/26] handle="k8s-pod-network.9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" 
host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.466 [INFO][5062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:44.508738 containerd[1701]: 2025-05-17 00:22:44.466 [INFO][5062] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.132/26] IPv6=[] ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" HandleID="k8s-pod-network.9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.527247 containerd[1701]: 2025-05-17 00:22:44.468 [INFO][5049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pgd9" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8e6bfc34-e1e7-4777-bb96-96b800f3b1da", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"", Pod:"coredns-674b8bbfcf-9pgd9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9fa6ceb9b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:44.527247 containerd[1701]: 2025-05-17 00:22:44.468 [INFO][5049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.132/32] ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pgd9" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.527247 containerd[1701]: 2025-05-17 00:22:44.468 [INFO][5049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9fa6ceb9b0 ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pgd9" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.527247 containerd[1701]: 2025-05-17 00:22:44.474 [INFO][5049] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pgd9" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.527247 containerd[1701]: 2025-05-17 00:22:44.475 [INFO][5049] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pgd9" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8e6bfc34-e1e7-4777-bb96-96b800f3b1da", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7", Pod:"coredns-674b8bbfcf-9pgd9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9fa6ceb9b0", MAC:"0a:d4:5f:0f:9b:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:44.527247 containerd[1701]: 2025-05-17 00:22:44.501 [INFO][5049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pgd9" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:44.652521 containerd[1701]: time="2025-05-17T00:22:44.651539116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:44.652521 containerd[1701]: time="2025-05-17T00:22:44.651618017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:44.652521 containerd[1701]: time="2025-05-17T00:22:44.651656517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:44.652521 containerd[1701]: time="2025-05-17T00:22:44.651775018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:44.656373 systemd-networkd[1543]: cali7e7ff9966d6: Gained IPv6LL May 17 00:22:44.731722 systemd[1]: Started cri-containerd-9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7.scope - libcontainer container 9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7. May 17 00:22:44.793947 systemd[1]: run-netns-cni\x2d94ab5b71\x2d2954\x2df3b7\x2d2bd2\x2d53c1a0d0a1a4.mount: Deactivated successfully. May 17 00:22:44.794339 systemd[1]: run-netns-cni\x2d65deee49\x2d17b1\x2d691d\x2db72c\x2d7dbcfb578f69.mount: Deactivated successfully. May 17 00:22:44.901356 containerd[1701]: time="2025-05-17T00:22:44.901294434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pgd9,Uid:8e6bfc34-e1e7-4777-bb96-96b800f3b1da,Namespace:kube-system,Attempt:1,} returns sandbox id \"9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7\"" May 17 00:22:44.919362 containerd[1701]: time="2025-05-17T00:22:44.919281694Z" level=info msg="CreateContainer within sandbox \"9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:22:44.952155 systemd-networkd[1543]: cali8cc4b8e6492: Link UP May 17 00:22:44.957633 systemd-networkd[1543]: cali8cc4b8e6492: Gained carrier May 17 00:22:44.967348 containerd[1701]: time="2025-05-17T00:22:44.967293921Z" level=info msg="StopPodSandbox for \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\"" May 17 00:22:44.979118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712034282.mount: Deactivated successfully. 
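The run-netns mount unit names above ("run-netns-cni\x2d94ab5b71\x2d...") come from systemd's path escaping: the leading "/" is stripped, each literal "-" becomes "\x2d", and each remaining "/" becomes "-". A small sketch of that rule, enough to reproduce the unit names in this log (the real systemd-escape handles many more characters):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath mimics the slice of systemd's path escaping visible in
    // the log above; it is not a full reimplementation.
    func escapePath(p string) string {
        p = strings.TrimPrefix(p, "/")
        p = strings.ReplaceAll(p, "-", `\x2d`)
        return strings.ReplaceAll(p, "/", "-")
    }

    func main() {
        // Reproduces "run-netns-cni\x2d94ab5b71\x2d2954\x2df3b7\x2d2bd2\x2d53c1a0d0a1a4.mount"
        fmt.Println(escapePath("/run/netns/cni-94ab5b71-2954-f3b7-2bd2-53c1a0d0a1a4") + ".mount")
    }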
May 17 00:22:44.992828 containerd[1701]: time="2025-05-17T00:22:44.992614346Z" level=info msg="CreateContainer within sandbox \"9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"506c34d92ff78d5889b7ce17e68ccca86066861ca4955a314b3a007ce8b4bb3d\"" May 17 00:22:44.997133 containerd[1701]: time="2025-05-17T00:22:44.995168868Z" level=info msg="StartContainer for \"506c34d92ff78d5889b7ce17e68ccca86066861ca4955a314b3a007ce8b4bb3d\"" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.642 [INFO][5076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0 coredns-674b8bbfcf- kube-system a9086985-91bc-4199-9a6a-aa40312930e7 994 0 2025-05-17 00:22:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-4e81e33f0f coredns-674b8bbfcf-6jwzq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8cc4b8e6492 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jwzq" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.643 [INFO][5076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jwzq" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.820 [INFO][5135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" HandleID="k8s-pod-network.e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.820 [INFO][5135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" HandleID="k8s-pod-network.e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005e2d00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-4e81e33f0f", "pod":"coredns-674b8bbfcf-6jwzq", "timestamp":"2025-05-17 00:22:44.820436016 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4e81e33f0f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.824 [INFO][5135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.824 [INFO][5135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
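The WorkloadEndpointPort dumps in the entries above print the coredns ports in Go hex (Port:0x35, Port:0x23c1). Decoding them confirms they are the familiar DNS and metrics ports from the plain-text form "{dns UDP 53 0} {dns-tcp TCP 53 0} {metrics TCP 9153 0}":

    package main

    import "fmt"

    func main() {
        // Hex port values from the WorkloadEndpointPort dumps above.
        fmt.Println(0x35)   // 53   (dns, dns-tcp)
        fmt.Println(0x23c1) // 9153 (coredns Prometheus metrics)
    }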
May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.824 [INFO][5135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4e81e33f0f' May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.842 [INFO][5135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.855 [INFO][5135] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.876 [INFO][5135] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.881 [INFO][5135] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.887 [INFO][5135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.887 [INFO][5135] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.891 [INFO][5135] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05 May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.908 [INFO][5135] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.926 [INFO][5135] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.133/26] block=192.168.29.128/26 handle="k8s-pod-network.e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.926 [INFO][5135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.133/26] handle="k8s-pod-network.e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.926 [INFO][5135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
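The repeated acquire/release pairs above show Calico serializing all IPAM work on the node behind a single host-wide lock: acquire, confirm block affinity, load the block, claim an address, write the block back, release. A schematic sketch of that sequence (not Calico's actual implementation), which also reproduces the .133 assignment logged for coredns-674b8bbfcf-6jwzq:

    package main

    import (
        "fmt"
        "sync"
    )

    var ipamMu sync.Mutex // stand-in for the "host-wide IPAM lock" in the log

    // assignFromBlock sketches the logged sequence: under the lock, treat
    // the node's affine block as loaded, then claim the next free address.
    func assignFromBlock(block string, used map[int]bool) (string, error) {
        ipamMu.Lock()         // "About to acquire host-wide IPAM lock."
        defer ipamMu.Unlock() // "Released host-wide IPAM lock."

        // "Trying affinity for 192.168.29.128/26" / "block has been loaded":
        // real Calico loads the block from the datastore at this point.
        for host := 2; host < 64; host++ { // a /26 spans 64 addresses
            if !used[host] {
                used[host] = true
                return fmt.Sprintf("192.168.29.%d/26", 128+host), nil
            }
        }
        return "", fmt.Errorf("block %s exhausted", block)
    }

    func main() {
        // .130, .131, .132 were already claimed in the entries above.
        used := map[int]bool{2: true, 3: true, 4: true}
        ip, _ := assignFromBlock("192.168.29.128/26", used)
        fmt.Println("claimed:", ip) // 192.168.29.133/26, matching the 6jwzq assignment
    }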
May 17 00:22:45.004854 containerd[1701]: 2025-05-17 00:22:44.926 [INFO][5135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.133/26] IPv6=[] ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" HandleID="k8s-pod-network.e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:45.005914 containerd[1701]: 2025-05-17 00:22:44.934 [INFO][5076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jwzq" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a9086985-91bc-4199-9a6a-aa40312930e7", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"", Pod:"coredns-674b8bbfcf-6jwzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8cc4b8e6492", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:45.005914 containerd[1701]: 2025-05-17 00:22:44.935 [INFO][5076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.133/32] ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jwzq" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:45.005914 containerd[1701]: 2025-05-17 00:22:44.935 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cc4b8e6492 ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jwzq" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:45.005914 containerd[1701]: 2025-05-17 00:22:44.959 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-6jwzq" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:45.005914 containerd[1701]: 2025-05-17 00:22:44.964 [INFO][5076] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jwzq" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a9086985-91bc-4199-9a6a-aa40312930e7", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05", Pod:"coredns-674b8bbfcf-6jwzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8cc4b8e6492", MAC:"c2:47:3a:1e:f9:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:45.005914 containerd[1701]: 2025-05-17 00:22:44.996 [INFO][5076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jwzq" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:45.124293 systemd-networkd[1543]: cali794ff389f2a: Link UP May 17 00:22:45.131173 systemd-networkd[1543]: cali794ff389f2a: Gained carrier May 17 00:22:45.171185 containerd[1701]: time="2025-05-17T00:22:45.169946421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:45.171185 containerd[1701]: time="2025-05-17T00:22:45.170066722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:45.171185 containerd[1701]: time="2025-05-17T00:22:45.170103922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:45.171479 containerd[1701]: time="2025-05-17T00:22:45.170393325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:45.174512 systemd[1]: Started cri-containerd-506c34d92ff78d5889b7ce17e68ccca86066861ca4955a314b3a007ce8b4bb3d.scope - libcontainer container 506c34d92ff78d5889b7ce17e68ccca86066861ca4955a314b3a007ce8b4bb3d. May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.757 [INFO][5113] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0 calico-apiserver-c46c46bf5- calico-apiserver d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8 991 0 2025-05-17 00:22:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c46c46bf5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-4e81e33f0f calico-apiserver-c46c46bf5-sgprk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali794ff389f2a [] [] }} ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-sgprk" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.757 [INFO][5113] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-sgprk" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.891 [INFO][5162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" HandleID="k8s-pod-network.8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.892 [INFO][5162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" HandleID="k8s-pod-network.8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000376850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-4e81e33f0f", "pod":"calico-apiserver-c46c46bf5-sgprk", "timestamp":"2025-05-17 00:22:44.891659749 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4e81e33f0f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.892 [INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.926 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
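[Editor's note] Note how the [5162] records interleave with [5135]: the second CNI invocation logs "About to acquire host-wide IPAM lock" at 00:22:44.892 but only acquires it at 00:22:44.926, the instant [5135] releases it, so concurrent pod setups on one node update IPAM state strictly one at a time. A minimal sketch of that serialization, assuming for illustration a plain advisory file lock — the lock path and mechanism here are placeholders, not Calico's actual implementation:

    // Sketch: serialize a critical section across processes with flock(2),
    // mirroring the Acquired/Released pairs in the log.
    package main

    import (
        "log"
        "os"
        "syscall"
    )

    func withHostWideLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        log.Println("About to acquire host-wide IPAM lock.")
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil { // blocks, as [5162] did
            return err
        }
        log.Println("Acquired host-wide IPAM lock.")
        defer func() {
            syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
            log.Println("Released host-wide IPAM lock.")
        }()
        return fn()
    }

    func main() {
        // /var/run/example-ipam.lock is a hypothetical path for this sketch.
        _ = withHostWideLock("/var/run/example-ipam.lock", func() error {
            // assign addresses while no other invocation can touch the block
            return nil
        })
    }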
May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.927 [INFO][5162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4e81e33f0f' May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.945 [INFO][5162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.978 [INFO][5162] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:44.989 [INFO][5162] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:45.015 [INFO][5162] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:45.042 [INFO][5162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:45.044 [INFO][5162] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:45.057 [INFO][5162] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62 May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:45.073 [INFO][5162] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:45.092 [INFO][5162] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.134/26] block=192.168.29.128/26 handle="k8s-pod-network.8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:45.093 [INFO][5162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.134/26] handle="k8s-pod-network.8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:45.093 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
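[Editor's note] The four assignments in this section (.133 above, .134 here, .135 and .136 further down) all come out of the same affine block. A /26 leaves 32 - 26 = 6 host bits, so the block holds 2^6 = 64 addresses, 192.168.29.128 through 192.168.29.191. A stdlib-only check of that range:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, blk, _ := net.ParseCIDR("192.168.29.128/26")
        ones, bits := blk.Mask.Size()
        fmt.Printf("addresses in block: %d\n", 1<<(bits-ones)) // 64
        for _, s := range []string{"192.168.29.133", "192.168.29.136", "192.168.29.192"} {
            fmt.Printf("%s in %s: %v\n", s, blk, blk.Contains(net.ParseIP(s)))
        }
        // 192.168.29.192 prints false: it belongs to the next /26.
    }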
May 17 00:22:45.198920 containerd[1701]: 2025-05-17 00:22:45.093 [INFO][5162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.134/26] IPv6=[] ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" HandleID="k8s-pod-network.8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:45.199907 containerd[1701]: 2025-05-17 00:22:45.108 [INFO][5113] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-sgprk" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0", GenerateName:"calico-apiserver-c46c46bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c46c46bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"", Pod:"calico-apiserver-c46c46bf5-sgprk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali794ff389f2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:45.199907 containerd[1701]: 2025-05-17 00:22:45.109 [INFO][5113] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.134/32] ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-sgprk" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:45.199907 containerd[1701]: 2025-05-17 00:22:45.109 [INFO][5113] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali794ff389f2a ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-sgprk" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:45.199907 containerd[1701]: 2025-05-17 00:22:45.144 [INFO][5113] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-sgprk" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:45.199907 containerd[1701]: 2025-05-17 00:22:45.153 [INFO][5113] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-sgprk" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0", GenerateName:"calico-apiserver-c46c46bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c46c46bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62", Pod:"calico-apiserver-c46c46bf5-sgprk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali794ff389f2a", MAC:"32:d5:b2:b8:32:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:45.199907 containerd[1701]: 2025-05-17 00:22:45.192 [INFO][5113] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-sgprk" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:45.263451 systemd[1]: Started cri-containerd-e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05.scope - libcontainer container e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05. May 17 00:22:45.309251 containerd[1701]: time="2025-05-17T00:22:45.304871919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:45.309251 containerd[1701]: time="2025-05-17T00:22:45.304955620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:45.309251 containerd[1701]: time="2025-05-17T00:22:45.304978120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:45.309251 containerd[1701]: time="2025-05-17T00:22:45.305241822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:45.361753 systemd-networkd[1543]: cali287c7f898db: Link UP May 17 00:22:45.375605 systemd-networkd[1543]: cali287c7f898db: Gained carrier May 17 00:22:45.407837 systemd[1]: Started cri-containerd-8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62.scope - libcontainer container 8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62. May 17 00:22:45.414224 containerd[1701]: time="2025-05-17T00:22:45.412943579Z" level=info msg="StartContainer for \"506c34d92ff78d5889b7ce17e68ccca86066861ca4955a314b3a007ce8b4bb3d\" returns successfully" May 17 00:22:45.451737 containerd[1701]: time="2025-05-17T00:22:45.451570322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6jwzq,Uid:a9086985-91bc-4199-9a6a-aa40312930e7,Namespace:kube-system,Attempt:1,} returns sandbox id \"e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05\"" May 17 00:22:45.470499 containerd[1701]: time="2025-05-17T00:22:45.470353389Z" level=info msg="CreateContainer within sandbox \"e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:44.755 [INFO][5103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0 calico-kube-controllers-5d55675d8- calico-system 13021eae-c635-4ef1-8524-560d7fa13a2f 995 0 2025-05-17 00:22:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d55675d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-4e81e33f0f calico-kube-controllers-5d55675d8-rdf7k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali287c7f898db [] [] }} ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Namespace="calico-system" Pod="calico-kube-controllers-5d55675d8-rdf7k" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:44.756 [INFO][5103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Namespace="calico-system" Pod="calico-kube-controllers-5d55675d8-rdf7k" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:44.922 [INFO][5164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" HandleID="k8s-pod-network.49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:44.923 [INFO][5164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" HandleID="k8s-pod-network.49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00033be70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-4e81e33f0f", "pod":"calico-kube-controllers-5d55675d8-rdf7k", "timestamp":"2025-05-17 00:22:44.922892726 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4e81e33f0f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:44.923 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.093 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.093 [INFO][5164] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4e81e33f0f' May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.104 [INFO][5164] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.146 [INFO][5164] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.186 [INFO][5164] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.203 [INFO][5164] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.213 [INFO][5164] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.213 [INFO][5164] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.236 [INFO][5164] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.291 [INFO][5164] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.315 [INFO][5164] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.135/26] block=192.168.29.128/26 handle="k8s-pod-network.49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.316 [INFO][5164] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.135/26] handle="k8s-pod-network.49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.316 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
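[Editor's note] A side note on the Workload and HandleID keys throughout these records: literal dashes in the node and pod names are doubled (node ci-4081.3.3-n-4e81e33f0f becomes ci--4081.3.3--n--4e81e33f0f), which frees the single "-" to act as the separator between the node, orchestrator, pod, and interface fields of the key. Hypothetical helpers reproducing that escaping — not Calico's code, but they round-trip the exact keys seen above:

    package main

    import (
        "fmt"
        "strings"
    )

    func escapeDashes(s string) string   { return strings.ReplaceAll(s, "-", "--") }
    func unescapeDashes(s string) string { return strings.ReplaceAll(s, "--", "-") }

    func main() {
        node, pod := "ci-4081.3.3-n-4e81e33f0f", "coredns-674b8bbfcf-6jwzq"
        key := fmt.Sprintf("%s-k8s-%s-eth0", escapeDashes(node), escapeDashes(pod))
        fmt.Println(key) // ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0
        fmt.Println(unescapeDashes("ci--4081.3.3--n--4e81e33f0f")) // ci-4081.3.3-n-4e81e33f0f
    }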
May 17 00:22:45.474364 containerd[1701]: 2025-05-17 00:22:45.316 [INFO][5164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.135/26] IPv6=[] ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" HandleID="k8s-pod-network.49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:45.476711 containerd[1701]: 2025-05-17 00:22:45.337 [INFO][5103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Namespace="calico-system" Pod="calico-kube-controllers-5d55675d8-rdf7k" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0", GenerateName:"calico-kube-controllers-5d55675d8-", Namespace:"calico-system", SelfLink:"", UID:"13021eae-c635-4ef1-8524-560d7fa13a2f", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d55675d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"", Pod:"calico-kube-controllers-5d55675d8-rdf7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali287c7f898db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:45.476711 containerd[1701]: 2025-05-17 00:22:45.340 [INFO][5103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.135/32] ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Namespace="calico-system" Pod="calico-kube-controllers-5d55675d8-rdf7k" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:45.476711 containerd[1701]: 2025-05-17 00:22:45.340 [INFO][5103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali287c7f898db ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Namespace="calico-system" Pod="calico-kube-controllers-5d55675d8-rdf7k" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:45.476711 containerd[1701]: 2025-05-17 00:22:45.385 [INFO][5103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Namespace="calico-system" Pod="calico-kube-controllers-5d55675d8-rdf7k" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 
00:22:45.476711 containerd[1701]: 2025-05-17 00:22:45.409 [INFO][5103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Namespace="calico-system" Pod="calico-kube-controllers-5d55675d8-rdf7k" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0", GenerateName:"calico-kube-controllers-5d55675d8-", Namespace:"calico-system", SelfLink:"", UID:"13021eae-c635-4ef1-8524-560d7fa13a2f", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d55675d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c", Pod:"calico-kube-controllers-5d55675d8-rdf7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali287c7f898db", MAC:"0e:46:e7:5a:ca:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:45.476711 containerd[1701]: 2025-05-17 00:22:45.467 [INFO][5103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c" Namespace="calico-system" Pod="calico-kube-controllers-5d55675d8-rdf7k" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.237 [INFO][5201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.248 [INFO][5201] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" iface="eth0" netns="/var/run/netns/cni-f6f964d6-9cf4-fe77-bdca-473845ef217c" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.249 [INFO][5201] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" iface="eth0" netns="/var/run/netns/cni-f6f964d6-9cf4-fe77-bdca-473845ef217c" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.256 [INFO][5201] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" iface="eth0" netns="/var/run/netns/cni-f6f964d6-9cf4-fe77-bdca-473845ef217c" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.260 [INFO][5201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.262 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.470 [INFO][5285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" HandleID="k8s-pod-network.9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.470 [INFO][5285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.470 [INFO][5285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.494 [WARNING][5285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" HandleID="k8s-pod-network.9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.495 [INFO][5285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" HandleID="k8s-pod-network.9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.500 [INFO][5285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:45.511632 containerd[1701]: 2025-05-17 00:22:45.504 [INFO][5201] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:45.513370 containerd[1701]: time="2025-05-17T00:22:45.513098869Z" level=info msg="TearDown network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\" successfully" May 17 00:22:45.513730 containerd[1701]: time="2025-05-17T00:22:45.513696874Z" level=info msg="StopPodSandbox for \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\" returns successfully" May 17 00:22:45.519480 containerd[1701]: time="2025-05-17T00:22:45.519359824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c46c46bf5-9vfk6,Uid:6b509eee-7c18-42a2-a9a2-d0bc267486dd,Namespace:calico-apiserver,Attempt:1,}" May 17 00:22:45.555118 containerd[1701]: time="2025-05-17T00:22:45.554654438Z" level=info msg="CreateContainer within sandbox \"e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20cf4f10155142ce418e80c5b80683939914988798b28ef7512c172a821660c6\"" May 17 00:22:45.560662 containerd[1701]: time="2025-05-17T00:22:45.560081586Z" level=info msg="StartContainer for \"20cf4f10155142ce418e80c5b80683939914988798b28ef7512c172a821660c6\"" May 17 00:22:45.619071 containerd[1701]: time="2025-05-17T00:22:45.608703518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:45.619071 containerd[1701]: time="2025-05-17T00:22:45.614753572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:45.619071 containerd[1701]: time="2025-05-17T00:22:45.614773772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:45.619071 containerd[1701]: time="2025-05-17T00:22:45.614907573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:45.705510 systemd[1]: Started cri-containerd-49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c.scope - libcontainer container 49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c. May 17 00:22:45.729736 systemd[1]: Started cri-containerd-20cf4f10155142ce418e80c5b80683939914988798b28ef7512c172a821660c6.scope - libcontainer container 20cf4f10155142ce418e80c5b80683939914988798b28ef7512c172a821660c6. May 17 00:22:45.772295 containerd[1701]: time="2025-05-17T00:22:45.770607856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c46c46bf5-sgprk,Uid:d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62\"" May 17 00:22:45.799764 systemd[1]: run-netns-cni\x2df6f964d6\x2d9cf4\x2dfe77\x2dbdca\x2d473845ef217c.mount: Deactivated successfully. 
May 17 00:22:45.852239 containerd[1701]: time="2025-05-17T00:22:45.850600466Z" level=info msg="StartContainer for \"20cf4f10155142ce418e80c5b80683939914988798b28ef7512c172a821660c6\" returns successfully" May 17 00:22:46.054139 containerd[1701]: time="2025-05-17T00:22:46.054080774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55675d8-rdf7k,Uid:13021eae-c635-4ef1-8524-560d7fa13a2f,Namespace:calico-system,Attempt:1,} returns sandbox id \"49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c\"" May 17 00:22:46.078602 systemd-networkd[1543]: cali988313f0c5e: Link UP May 17 00:22:46.080498 systemd-networkd[1543]: cali988313f0c5e: Gained carrier May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.839 [INFO][5371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0 calico-apiserver-c46c46bf5- calico-apiserver 6b509eee-7c18-42a2-a9a2-d0bc267486dd 1011 0 2025-05-17 00:22:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c46c46bf5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-4e81e33f0f calico-apiserver-c46c46bf5-9vfk6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali988313f0c5e [] [] }} ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-9vfk6" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.840 [INFO][5371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-9vfk6" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.924 [INFO][5440] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" HandleID="k8s-pod-network.5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.924 [INFO][5440] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" HandleID="k8s-pod-network.5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003840f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-4e81e33f0f", "pod":"calico-apiserver-c46c46bf5-9vfk6", "timestamp":"2025-05-17 00:22:45.922934509 +0000 UTC"}, Hostname:"ci-4081.3.3-n-4e81e33f0f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.924 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.924 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.924 [INFO][5440] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-4e81e33f0f' May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.954 [INFO][5440] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.981 [INFO][5440] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.995 [INFO][5440] ipam/ipam.go 511: Trying affinity for 192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:45.998 [INFO][5440] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:46.003 [INFO][5440] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:46.003 [INFO][5440] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:46.006 [INFO][5440] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1 May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:46.021 [INFO][5440] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:46.052 [INFO][5440] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.29.136/26] block=192.168.29.128/26 handle="k8s-pod-network.5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:46.052 [INFO][5440] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.136/26] handle="k8s-pod-network.5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" host="ci-4081.3.3-n-4e81e33f0f" May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:46.052 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:22:46.113519 containerd[1701]: 2025-05-17 00:22:46.052 [INFO][5440] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.136/26] IPv6=[] ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" HandleID="k8s-pod-network.5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:46.117935 containerd[1701]: 2025-05-17 00:22:46.062 [INFO][5371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-9vfk6" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0", GenerateName:"calico-apiserver-c46c46bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b509eee-7c18-42a2-a9a2-d0bc267486dd", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c46c46bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"", Pod:"calico-apiserver-c46c46bf5-9vfk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali988313f0c5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:46.117935 containerd[1701]: 2025-05-17 00:22:46.065 [INFO][5371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.136/32] ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-9vfk6" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:46.117935 containerd[1701]: 2025-05-17 00:22:46.066 [INFO][5371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali988313f0c5e ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-9vfk6" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:46.117935 containerd[1701]: 2025-05-17 00:22:46.082 [INFO][5371] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-9vfk6" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:46.117935 containerd[1701]: 2025-05-17 00:22:46.082 [INFO][5371] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-9vfk6" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0", GenerateName:"calico-apiserver-c46c46bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b509eee-7c18-42a2-a9a2-d0bc267486dd", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c46c46bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1", Pod:"calico-apiserver-c46c46bf5-9vfk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali988313f0c5e", MAC:"52:2a:20:74:1f:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:46.117935 containerd[1701]: 2025-05-17 00:22:46.106 [INFO][5371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1" Namespace="calico-apiserver" Pod="calico-apiserver-c46c46bf5-9vfk6" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:46.136385 containerd[1701]: time="2025-05-17T00:22:46.136331304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:46.145813 containerd[1701]: time="2025-05-17T00:22:46.145467786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:22:46.153236 containerd[1701]: time="2025-05-17T00:22:46.152921952Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:46.162132 containerd[1701]: time="2025-05-17T00:22:46.161167825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:46.162132 containerd[1701]: time="2025-05-17T00:22:46.161418627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:46.162132 containerd[1701]: time="2025-05-17T00:22:46.161463328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:46.162132 containerd[1701]: time="2025-05-17T00:22:46.161591929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:46.182688 containerd[1701]: time="2025-05-17T00:22:46.181073002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:46.186570 containerd[1701]: time="2025-05-17T00:22:46.181940210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 3.30550536s" May 17 00:22:46.186570 containerd[1701]: time="2025-05-17T00:22:46.185640242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:22:46.189853 containerd[1701]: time="2025-05-17T00:22:46.189805779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:22:46.203219 containerd[1701]: time="2025-05-17T00:22:46.202538592Z" level=info msg="CreateContainer within sandbox \"ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:22:46.221684 systemd[1]: Started cri-containerd-5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1.scope - libcontainer container 5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1. 
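[Editor's note] The pull records above identify one image three ways: by repo tag (ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0, which can move), by repo digest (the @sha256: form, which pins content), and by the content-addressed image id that containerd stores locally. A simplified, stdlib-only splitter for the repo/tag/digest parts — real reference parsing, e.g. the distribution grammar, is stricter than this:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks "repo[:tag][@digest]" apart. Simplifications: any '@'
    // introduces a digest, and a ':' after the final '/' introduces a tag
    // (so registry ports like localhost:5000 are not mistaken for tags).
    func splitRef(ref string) (repo, tag, digest string) {
        if at := strings.IndexByte(ref, '@'); at >= 0 {
            ref, digest = ref[:at], ref[at+1:]
        }
        slash := strings.LastIndexByte(ref, '/')
        if colon := strings.LastIndexByte(ref, ':'); colon > slash {
            ref, tag = ref[:colon], ref[colon+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, _ := splitRef("ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0")
        fmt.Println(repo, tag) // ghcr.io/flatcar/calico/node-driver-registrar v3.30.0
        _, _, digest := splitRef("ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8")
        fmt.Println(digest) // sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8
    }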
May 17 00:22:46.256357 systemd-networkd[1543]: calic9fa6ceb9b0: Gained IPv6LL May 17 00:22:46.285105 containerd[1701]: time="2025-05-17T00:22:46.285050825Z" level=info msg="CreateContainer within sandbox \"ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e97c77cc1a3cc5829aeae95ce522245d42c074bb1c7363a040634f723a8d962\"" May 17 00:22:46.290229 containerd[1701]: time="2025-05-17T00:22:46.288293654Z" level=info msg="StartContainer for \"8e97c77cc1a3cc5829aeae95ce522245d42c074bb1c7363a040634f723a8d962\"" May 17 00:22:46.334269 kubelet[3166]: I0517 00:22:46.333735 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9pgd9" podStartSLOduration=46.332792249 podStartE2EDuration="46.332792249s" podCreationTimestamp="2025-05-17 00:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:46.332266745 +0000 UTC m=+50.492792882" watchObservedRunningTime="2025-05-17 00:22:46.332792249 +0000 UTC m=+50.493318386" May 17 00:22:46.386081 containerd[1701]: time="2025-05-17T00:22:46.386024122Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:46.391569 containerd[1701]: time="2025-05-17T00:22:46.391490171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:46.391733 containerd[1701]: time="2025-05-17T00:22:46.391633772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:22:46.392281 kubelet[3166]: E0517 00:22:46.391827 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:46.392281 kubelet[3166]: E0517 00:22:46.391893 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:46.392281 kubelet[3166]: E0517 00:22:46.392170 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l2l6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-z6qms_calico-system(7fb9aa5a-970a-4e82-b193-535f4a3ef021): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:46.394070 containerd[1701]: time="2025-05-17T00:22:46.393953993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:22:46.395417 kubelet[3166]: E0517 00:22:46.395231 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:22:46.406504 systemd[1]: Started cri-containerd-8e97c77cc1a3cc5829aeae95ce522245d42c074bb1c7363a040634f723a8d962.scope - libcontainer container 8e97c77cc1a3cc5829aeae95ce522245d42c074bb1c7363a040634f723a8d962. May 17 00:22:46.448355 systemd-networkd[1543]: cali8cc4b8e6492: Gained IPv6LL May 17 00:22:46.468912 kubelet[3166]: E0517 00:22:46.468854 3166 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b509eee_7c18_42a2_a9a2_d0bc267486dd.slice/cri-containerd-5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1.scope\": RecentStats: unable to find data in memory cache]" May 17 00:22:46.471361 kubelet[3166]: I0517 00:22:46.466163 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6jwzq" podStartSLOduration=46.466140334 podStartE2EDuration="46.466140334s" podCreationTimestamp="2025-05-17 00:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:46.424172961 +0000 UTC m=+50.584699098" watchObservedRunningTime="2025-05-17 00:22:46.466140334 +0000 UTC m=+50.626666471" May 17 00:22:46.473894 containerd[1701]: time="2025-05-17T00:22:46.473651501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c46c46bf5-9vfk6,Uid:6b509eee-7c18-42a2-a9a2-d0bc267486dd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1\"" May 17 00:22:46.542317 containerd[1701]: time="2025-05-17T00:22:46.542265410Z" level=info msg="StartContainer for \"8e97c77cc1a3cc5829aeae95ce522245d42c074bb1c7363a040634f723a8d962\" returns successfully" May 17 00:22:46.576366 systemd-networkd[1543]: cali794ff389f2a: Gained IPv6LL May 17 00:22:46.641728 systemd-networkd[1543]: cali287c7f898db: Gained IPv6LL May 17 00:22:47.064954 kubelet[3166]: I0517 00:22:47.064914 3166 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:22:47.065145 kubelet[3166]: I0517 00:22:47.064971 3166 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:22:47.339067 kubelet[3166]: E0517 00:22:47.338613 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:22:47.362761 kubelet[3166]: I0517 
00:22:47.362157 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q5tzj" podStartSLOduration=28.514250378 podStartE2EDuration="33.362131492s" podCreationTimestamp="2025-05-17 00:22:14 +0000 UTC" firstStartedPulling="2025-05-17 00:22:41.33910684 +0000 UTC m=+45.499632877" lastFinishedPulling="2025-05-17 00:22:46.186987854 +0000 UTC m=+50.347513991" observedRunningTime="2025-05-17 00:22:47.359927573 +0000 UTC m=+51.520453610" watchObservedRunningTime="2025-05-17 00:22:47.362131492 +0000 UTC m=+51.522657629" May 17 00:22:48.048536 systemd-networkd[1543]: cali988313f0c5e: Gained IPv6LL May 17 00:22:50.215905 containerd[1701]: time="2025-05-17T00:22:50.215839192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:50.225746 containerd[1701]: time="2025-05-17T00:22:50.225467977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:22:50.230552 containerd[1701]: time="2025-05-17T00:22:50.230458921Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:50.236387 containerd[1701]: time="2025-05-17T00:22:50.236306173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:50.237595 containerd[1701]: time="2025-05-17T00:22:50.237017979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.843005986s" May 17 00:22:50.237595 containerd[1701]: time="2025-05-17T00:22:50.237069180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:22:50.239001 containerd[1701]: time="2025-05-17T00:22:50.238429092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:22:50.246343 containerd[1701]: time="2025-05-17T00:22:50.246055359Z" level=info msg="CreateContainer within sandbox \"8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:22:50.284863 containerd[1701]: time="2025-05-17T00:22:50.284810203Z" level=info msg="CreateContainer within sandbox \"8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c8d0d0329111320f68f4923a94ec1d9650ab9b14506c867895df9199984f27a3\"" May 17 00:22:50.286024 containerd[1701]: time="2025-05-17T00:22:50.285645210Z" level=info msg="StartContainer for \"c8d0d0329111320f68f4923a94ec1d9650ab9b14506c867895df9199984f27a3\"" May 17 00:22:50.329404 systemd[1]: Started cri-containerd-c8d0d0329111320f68f4923a94ec1d9650ab9b14506c867895df9199984f27a3.scope - libcontainer container c8d0d0329111320f68f4923a94ec1d9650ab9b14506c867895df9199984f27a3. 
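The goldmane pull failure above is not node-local: containerd never reaches the image blobs because the anonymous bearer-token request to ghcr.io is itself rejected with 403 Forbidden. A minimal sketch that replays exactly that token request (URL copied verbatim from the log; this program is illustrative, not part of the node's tooling):

    // tokencheck.go — replays the anonymous token fetch containerd performs
    // before pulling ghcr.io/flatcar/calico/goldmane:v3.30.0. A 403 here
    // confirms the registry is refusing the pull regardless of node state.
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"
    	resp, err := http.Get(url)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// containerd surfaces any non-200 status here as
    	// "failed to authorize: failed to fetch anonymous token".
    	fmt.Printf("status=%s body=%s\n", resp.Status, body)
    }

Once the pull fails, kubelet moves the container into back-off, which is why the same 403 text resurfaces at 00:22:47 as ImagePullBackOff rather than a fresh registry round-trip.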
May 17 00:22:50.386099 containerd[1701]: time="2025-05-17T00:22:50.385888898Z" level=info msg="StartContainer for \"c8d0d0329111320f68f4923a94ec1d9650ab9b14506c867895df9199984f27a3\" returns successfully" May 17 00:22:52.363659 kubelet[3166]: I0517 00:22:52.363620 3166 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.358297 1666 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.358356 1666 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.358533 1666 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.359025 1666 omaha_request_params.cc:62] Current group set to lts May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.359166 1666 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.359179 1666 update_attempter.cc:643] Scheduling an action processor start. May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.359222 1666 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.359261 1666 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.359338 1666 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.359349 1666 omaha_request_action.cc:272] Request: May 17 00:22:53.360299 update_engine[1666]: May 17 00:22:53.360299 update_engine[1666]: May 17 00:22:53.360299 update_engine[1666]: May 17 00:22:53.360299 update_engine[1666]: May 17 00:22:53.360299 update_engine[1666]: May 17 00:22:53.360299 update_engine[1666]: May 17 00:22:53.360299 update_engine[1666]: May 17 00:22:53.360299 update_engine[1666]: May 17 00:22:53.360299 update_engine[1666]: I20250517 00:22:53.359359 1666 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:22:53.362022 locksmithd[1715]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 00:22:53.362906 update_engine[1666]: I20250517 00:22:53.362865 1666 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:22:53.363693 update_engine[1666]: I20250517 00:22:53.363656 1666 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
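The update_engine lines above are the periodic Omaha update check. The host it posts to ("disabled") comes from the configured update server; on Flatcar that is normally set in /etc/flatcar/update.conf, where pointing SERVER at the literal string "disabled" is the conventional way to turn updates off — the client still schedules checks, but they fail fast on DNS resolution, as the next lines show. A configuration consistent with this log (an assumption; the actual file is not shown in this capture):

    # /etc/flatcar/update.conf
    GROUP=lts
    SERVER=disabled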
May 17 00:22:53.404392 update_engine[1666]: E20250517 00:22:53.404251 1666 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:22:53.404616 update_engine[1666]: I20250517 00:22:53.404466 1666 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 00:22:53.988887 kubelet[3166]: I0517 00:22:53.988782 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c46c46bf5-sgprk" podStartSLOduration=38.529509161 podStartE2EDuration="42.988747821s" podCreationTimestamp="2025-05-17 00:22:11 +0000 UTC" firstStartedPulling="2025-05-17 00:22:45.77896083 +0000 UTC m=+49.939486967" lastFinishedPulling="2025-05-17 00:22:50.23819949 +0000 UTC m=+54.398725627" observedRunningTime="2025-05-17 00:22:51.385918859 +0000 UTC m=+55.546444896" watchObservedRunningTime="2025-05-17 00:22:53.988747821 +0000 UTC m=+58.149273858" May 17 00:22:54.253116 containerd[1701]: time="2025-05-17T00:22:54.252956462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:54.255352 containerd[1701]: time="2025-05-17T00:22:54.255280082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:22:54.259706 containerd[1701]: time="2025-05-17T00:22:54.259642021Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:54.266842 containerd[1701]: time="2025-05-17T00:22:54.266791084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:54.268820 containerd[1701]: time="2025-05-17T00:22:54.268225597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 4.029755405s" May 17 00:22:54.268820 containerd[1701]: time="2025-05-17T00:22:54.268273097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:22:54.270346 containerd[1701]: time="2025-05-17T00:22:54.270313415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:22:54.295799 containerd[1701]: time="2025-05-17T00:22:54.295756341Z" level=info msg="CreateContainer within sandbox \"49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:22:54.334596 containerd[1701]: time="2025-05-17T00:22:54.334538485Z" level=info msg="CreateContainer within sandbox \"49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"755a11aece27ce4f11c88528f245c4a090288bd362110f91d974665cfec1da64\"" May 17 00:22:54.336662 containerd[1701]: time="2025-05-17T00:22:54.335244291Z" level=info msg="StartContainer for 
\"755a11aece27ce4f11c88528f245c4a090288bd362110f91d974665cfec1da64\"" May 17 00:22:54.377393 systemd[1]: Started cri-containerd-755a11aece27ce4f11c88528f245c4a090288bd362110f91d974665cfec1da64.scope - libcontainer container 755a11aece27ce4f11c88528f245c4a090288bd362110f91d974665cfec1da64. May 17 00:22:54.426010 containerd[1701]: time="2025-05-17T00:22:54.425960595Z" level=info msg="StartContainer for \"755a11aece27ce4f11c88528f245c4a090288bd362110f91d974665cfec1da64\" returns successfully" May 17 00:22:54.595798 containerd[1701]: time="2025-05-17T00:22:54.595655598Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:54.597813 containerd[1701]: time="2025-05-17T00:22:54.597752617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:22:54.602991 containerd[1701]: time="2025-05-17T00:22:54.602544759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 331.562038ms" May 17 00:22:54.602991 containerd[1701]: time="2025-05-17T00:22:54.602597160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:22:54.606542 containerd[1701]: time="2025-05-17T00:22:54.606242892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:22:54.614375 containerd[1701]: time="2025-05-17T00:22:54.614328664Z" level=info msg="CreateContainer within sandbox \"5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:22:54.650167 containerd[1701]: time="2025-05-17T00:22:54.650070380Z" level=info msg="CreateContainer within sandbox \"5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c49351909519932608b8b601306ad987a1b1f90e2a119d33b196bb78c94b61d2\"" May 17 00:22:54.651404 containerd[1701]: time="2025-05-17T00:22:54.651258691Z" level=info msg="StartContainer for \"c49351909519932608b8b601306ad987a1b1f90e2a119d33b196bb78c94b61d2\"" May 17 00:22:54.687408 systemd[1]: Started cri-containerd-c49351909519932608b8b601306ad987a1b1f90e2a119d33b196bb78c94b61d2.scope - libcontainer container c49351909519932608b8b601306ad987a1b1f90e2a119d33b196bb78c94b61d2. 
May 17 00:22:54.734588 containerd[1701]: time="2025-05-17T00:22:54.734388327Z" level=info msg="StartContainer for \"c49351909519932608b8b601306ad987a1b1f90e2a119d33b196bb78c94b61d2\" returns successfully" May 17 00:22:54.791041 containerd[1701]: time="2025-05-17T00:22:54.790984429Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:54.794818 containerd[1701]: time="2025-05-17T00:22:54.794640761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:54.794818 containerd[1701]: time="2025-05-17T00:22:54.794760962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:22:54.795058 kubelet[3166]: E0517 00:22:54.794961 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:54.795058 kubelet[3166]: E0517 00:22:54.795019 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:54.795233 kubelet[3166]: E0517 00:22:54.795174 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cc198a9d9a44458f955616eaa1e83f23,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbk6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ffb5d99bc-xn6zf_calico-system(5e5288d0-1e2c-4d58-be59-3922b5edffe0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:54.797852 containerd[1701]: time="2025-05-17T00:22:54.797818989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:22:54.964057 containerd[1701]: time="2025-05-17T00:22:54.963892061Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:54.967914 containerd[1701]: time="2025-05-17T00:22:54.967844096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:22:54.968250 containerd[1701]: time="2025-05-17T00:22:54.967824896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:54.968426 kubelet[3166]: E0517 00:22:54.968380 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:54.968553 kubelet[3166]: E0517 00:22:54.968440 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:54.968673 kubelet[3166]: E0517 00:22:54.968627 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbk6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ffb5d99bc-xn6zf_calico-system(5e5288d0-1e2c-4d58-be59-3922b5edffe0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:54.970240 kubelet[3166]: E0517 00:22:54.970179 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:22:54.974876 kubelet[3166]: I0517 00:22:54.974836 3166 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:55.416377 kubelet[3166]: I0517 00:22:55.416124 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d55675d8-rdf7k" podStartSLOduration=32.204056862 podStartE2EDuration="40.416097867s" podCreationTimestamp="2025-05-17 00:22:15 +0000 UTC" firstStartedPulling="2025-05-17 00:22:46.057712806 +0000 UTC m=+50.218238843" lastFinishedPulling="2025-05-17 00:22:54.269753811 +0000 UTC m=+58.430279848" observedRunningTime="2025-05-17 00:22:55.412595636 +0000 UTC m=+59.573121773" watchObservedRunningTime="2025-05-17 00:22:55.416097867 +0000 UTC m=+59.576624004" May 17 00:22:55.484971 kubelet[3166]: I0517 00:22:55.483788 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c46c46bf5-9vfk6" podStartSLOduration=36.357493432 podStartE2EDuration="44.483560565s" podCreationTimestamp="2025-05-17 00:22:11 +0000 UTC" firstStartedPulling="2025-05-17 00:22:46.478268342 +0000 UTC m=+50.638794379" lastFinishedPulling="2025-05-17 00:22:54.604335475 +0000 UTC m=+58.764861512" observedRunningTime="2025-05-17 00:22:55.441030788 +0000 UTC m=+59.601556925" watchObservedRunningTime="2025-05-17 00:22:55.483560565 +0000 UTC m=+59.644086602" May 17 00:22:55.973265 containerd[1701]: time="2025-05-17T00:22:55.972997502Z" level=info msg="StopPodSandbox for \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\"" May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.033 [WARNING][5734] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0", GenerateName:"calico-apiserver-c46c46bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b509eee-7c18-42a2-a9a2-d0bc267486dd", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c46c46bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1", Pod:"calico-apiserver-c46c46bf5-9vfk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali988313f0c5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.034 [INFO][5734] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.034 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" iface="eth0" netns="" May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.034 [INFO][5734] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.034 [INFO][5734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.089 [INFO][5742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" HandleID="k8s-pod-network.9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.090 [INFO][5742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.090 [INFO][5742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.106 [WARNING][5742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" HandleID="k8s-pod-network.9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.106 [INFO][5742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" HandleID="k8s-pod-network.9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.110 [INFO][5742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.115033 containerd[1701]: 2025-05-17 00:22:56.112 [INFO][5734] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:56.115738 containerd[1701]: time="2025-05-17T00:22:56.115195747Z" level=info msg="TearDown network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\" successfully" May 17 00:22:56.115738 containerd[1701]: time="2025-05-17T00:22:56.115238848Z" level=info msg="StopPodSandbox for \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\" returns successfully" May 17 00:22:56.118658 containerd[1701]: time="2025-05-17T00:22:56.118618477Z" level=info msg="RemovePodSandbox for \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\"" May 17 00:22:56.118794 containerd[1701]: time="2025-05-17T00:22:56.118668777Z" level=info msg="Forcibly stopping sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\"" May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.178 [WARNING][5758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0", GenerateName:"calico-apiserver-c46c46bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b509eee-7c18-42a2-a9a2-d0bc267486dd", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c46c46bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"5d0a703f1885aa96a8ca2800e49f3d36ca25880e70c8fb2a69744ebb7162f2a1", Pod:"calico-apiserver-c46c46bf5-9vfk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali988313f0c5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.179 [INFO][5758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.179 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" iface="eth0" netns="" May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.179 [INFO][5758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.179 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.200 [INFO][5766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" HandleID="k8s-pod-network.9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.200 [INFO][5766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.200 [INFO][5766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.206 [WARNING][5766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" HandleID="k8s-pod-network.9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.206 [INFO][5766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" HandleID="k8s-pod-network.9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--9vfk6-eth0" May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.207 [INFO][5766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.210153 containerd[1701]: 2025-05-17 00:22:56.208 [INFO][5758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0" May 17 00:22:56.210153 containerd[1701]: time="2025-05-17T00:22:56.210063472Z" level=info msg="TearDown network for sandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\" successfully" May 17 00:22:56.218957 containerd[1701]: time="2025-05-17T00:22:56.218910749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:56.219123 containerd[1701]: time="2025-05-17T00:22:56.218999750Z" level=info msg="RemovePodSandbox \"9f665461fd48177d2323bf0390a6d95bed36aa2f93271f088788bd3accd62cd0\" returns successfully" May 17 00:22:56.219648 containerd[1701]: time="2025-05-17T00:22:56.219620556Z" level=info msg="StopPodSandbox for \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\"" May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.253 [WARNING][5781] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8e6bfc34-e1e7-4777-bb96-96b800f3b1da", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7", Pod:"coredns-674b8bbfcf-9pgd9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9fa6ceb9b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.253 [INFO][5781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.253 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" iface="eth0" netns="" May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.253 [INFO][5781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.253 [INFO][5781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.279 [INFO][5788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" HandleID="k8s-pod-network.a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.279 [INFO][5788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.280 [INFO][5788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.286 [WARNING][5788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" HandleID="k8s-pod-network.a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.286 [INFO][5788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" HandleID="k8s-pod-network.a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.287 [INFO][5788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.289937 containerd[1701]: 2025-05-17 00:22:56.288 [INFO][5781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:56.289937 containerd[1701]: time="2025-05-17T00:22:56.289698265Z" level=info msg="TearDown network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\" successfully" May 17 00:22:56.289937 containerd[1701]: time="2025-05-17T00:22:56.289731665Z" level=info msg="StopPodSandbox for \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\" returns successfully" May 17 00:22:56.292044 containerd[1701]: time="2025-05-17T00:22:56.291370480Z" level=info msg="RemovePodSandbox for \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\"" May 17 00:22:56.292044 containerd[1701]: time="2025-05-17T00:22:56.291410080Z" level=info msg="Forcibly stopping sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\"" May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.326 [WARNING][5804] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8e6bfc34-e1e7-4777-bb96-96b800f3b1da", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"9ff7cd660d84c9d1cce18387a1eb01026bd5ef4f1196ca0f4018b8dd3471ceb7", Pod:"coredns-674b8bbfcf-9pgd9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9fa6ceb9b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.326 [INFO][5804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.326 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" iface="eth0" netns="" May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.326 [INFO][5804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.326 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.347 [INFO][5811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" HandleID="k8s-pod-network.a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.347 [INFO][5811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.347 [INFO][5811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.353 [WARNING][5811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" HandleID="k8s-pod-network.a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.353 [INFO][5811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" HandleID="k8s-pod-network.a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--9pgd9-eth0" May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.354 [INFO][5811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.356903 containerd[1701]: 2025-05-17 00:22:56.355 [INFO][5804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05" May 17 00:22:56.357725 containerd[1701]: time="2025-05-17T00:22:56.356957050Z" level=info msg="TearDown network for sandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\" successfully" May 17 00:22:56.367855 containerd[1701]: time="2025-05-17T00:22:56.367797544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:56.367990 containerd[1701]: time="2025-05-17T00:22:56.367890245Z" level=info msg="RemovePodSandbox \"a8b5edfa5348f7849f8d038bddfb258922c133ae4e5e967a1e84153ad26bfe05\" returns successfully" May 17 00:22:56.368614 containerd[1701]: time="2025-05-17T00:22:56.368571951Z" level=info msg="StopPodSandbox for \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\"" May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.409 [WARNING][5825] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0", GenerateName:"calico-kube-controllers-5d55675d8-", Namespace:"calico-system", SelfLink:"", UID:"13021eae-c635-4ef1-8524-560d7fa13a2f", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d55675d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c", Pod:"calico-kube-controllers-5d55675d8-rdf7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali287c7f898db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.409 [INFO][5825] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.409 [INFO][5825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" iface="eth0" netns="" May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.409 [INFO][5825] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.409 [INFO][5825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.433 [INFO][5832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" HandleID="k8s-pod-network.9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.434 [INFO][5832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.434 [INFO][5832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.439 [WARNING][5832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" HandleID="k8s-pod-network.9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.439 [INFO][5832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" HandleID="k8s-pod-network.9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.441 [INFO][5832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.443602 containerd[1701]: 2025-05-17 00:22:56.442 [INFO][5825] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:56.444806 containerd[1701]: time="2025-05-17T00:22:56.443667504Z" level=info msg="TearDown network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\" successfully" May 17 00:22:56.444806 containerd[1701]: time="2025-05-17T00:22:56.443698605Z" level=info msg="StopPodSandbox for \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\" returns successfully" May 17 00:22:56.444806 containerd[1701]: time="2025-05-17T00:22:56.444239609Z" level=info msg="RemovePodSandbox for \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\"" May 17 00:22:56.444806 containerd[1701]: time="2025-05-17T00:22:56.444282310Z" level=info msg="Forcibly stopping sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\"" May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.477 [WARNING][5846] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0", GenerateName:"calico-kube-controllers-5d55675d8-", Namespace:"calico-system", SelfLink:"", UID:"13021eae-c635-4ef1-8524-560d7fa13a2f", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d55675d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"49959be83b0d3aea6de93ecbf411d8648fc2ff8f811ef62f29ef5cf2fe5a108c", Pod:"calico-kube-controllers-5d55675d8-rdf7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali287c7f898db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.477 [INFO][5846] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.477 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" iface="eth0" netns="" May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.477 [INFO][5846] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.477 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.499 [INFO][5853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" HandleID="k8s-pod-network.9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.500 [INFO][5853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.500 [INFO][5853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.506 [WARNING][5853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" HandleID="k8s-pod-network.9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.506 [INFO][5853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" HandleID="k8s-pod-network.9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--kube--controllers--5d55675d8--rdf7k-eth0" May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.507 [INFO][5853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.509493 containerd[1701]: 2025-05-17 00:22:56.508 [INFO][5846] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820" May 17 00:22:56.510231 containerd[1701]: time="2025-05-17T00:22:56.509542277Z" level=info msg="TearDown network for sandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\" successfully" May 17 00:22:56.517790 containerd[1701]: time="2025-05-17T00:22:56.517725348Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:56.517968 containerd[1701]: time="2025-05-17T00:22:56.517829549Z" level=info msg="RemovePodSandbox \"9f85ab0eb87c2d07ea58c85831d122f22973a3bc823f05e8de2269e12f22c820\" returns successfully" May 17 00:22:56.518479 containerd[1701]: time="2025-05-17T00:22:56.518443455Z" level=info msg="StopPodSandbox for \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\"" May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.574 [WARNING][5867] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"062cbe61-e0c7-468e-8be4-9dc29bebfa6f", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10", Pod:"csi-node-driver-q5tzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic27cfc188c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.574 [INFO][5867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.574 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" iface="eth0" netns="" May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.574 [INFO][5867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.574 [INFO][5867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.595 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" HandleID="k8s-pod-network.acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.595 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.595 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.603 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" HandleID="k8s-pod-network.acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.603 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" HandleID="k8s-pod-network.acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.605 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.607880 containerd[1701]: 2025-05-17 00:22:56.606 [INFO][5867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:56.607880 containerd[1701]: time="2025-05-17T00:22:56.607839932Z" level=info msg="TearDown network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\" successfully" May 17 00:22:56.607880 containerd[1701]: time="2025-05-17T00:22:56.607870632Z" level=info msg="StopPodSandbox for \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\" returns successfully" May 17 00:22:56.609676 containerd[1701]: time="2025-05-17T00:22:56.609646848Z" level=info msg="RemovePodSandbox for \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\"" May 17 00:22:56.609801 containerd[1701]: time="2025-05-17T00:22:56.609681648Z" level=info msg="Forcibly stopping sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\"" May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.644 [WARNING][5890] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"062cbe61-e0c7-468e-8be4-9dc29bebfa6f", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"ed9aaba053148132107b1a76fe07570de7571a34b47de59f105efdcc9624ac10", Pod:"csi-node-driver-q5tzj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic27cfc188c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.645 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.645 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" iface="eth0" netns="" May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.645 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.645 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.665 [INFO][5897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" HandleID="k8s-pod-network.acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.665 [INFO][5897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.665 [INFO][5897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.671 [WARNING][5897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" HandleID="k8s-pod-network.acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.671 [INFO][5897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" HandleID="k8s-pod-network.acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-csi--node--driver--q5tzj-eth0" May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.673 [INFO][5897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.675398 containerd[1701]: 2025-05-17 00:22:56.674 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6" May 17 00:22:56.676057 containerd[1701]: time="2025-05-17T00:22:56.675446520Z" level=info msg="TearDown network for sandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\" successfully" May 17 00:22:56.686455 containerd[1701]: time="2025-05-17T00:22:56.686387815Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:56.686455 containerd[1701]: time="2025-05-17T00:22:56.686488216Z" level=info msg="RemovePodSandbox \"acdbf4504f609e37ced5df3a42679a6eeb107d71a1636755b3a0b2c30c0aadd6\" returns successfully" May 17 00:22:56.687156 containerd[1701]: time="2025-05-17T00:22:56.687127422Z" level=info msg="StopPodSandbox for \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\"" May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.720 [WARNING][5911] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a9086985-91bc-4199-9a6a-aa40312930e7", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05", Pod:"coredns-674b8bbfcf-6jwzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8cc4b8e6492", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.720 [INFO][5911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.720 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" iface="eth0" netns="" May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.720 [INFO][5911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.720 [INFO][5911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.741 [INFO][5918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" HandleID="k8s-pod-network.cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.741 [INFO][5918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.741 [INFO][5918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.748 [WARNING][5918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" HandleID="k8s-pod-network.cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.748 [INFO][5918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" HandleID="k8s-pod-network.cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.750 [INFO][5918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.752797 containerd[1701]: 2025-05-17 00:22:56.751 [INFO][5911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:56.753491 containerd[1701]: time="2025-05-17T00:22:56.752839993Z" level=info msg="TearDown network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\" successfully" May 17 00:22:56.753491 containerd[1701]: time="2025-05-17T00:22:56.752868794Z" level=info msg="StopPodSandbox for \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\" returns successfully" May 17 00:22:56.753491 containerd[1701]: time="2025-05-17T00:22:56.753469499Z" level=info msg="RemovePodSandbox for \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\"" May 17 00:22:56.753615 containerd[1701]: time="2025-05-17T00:22:56.753504899Z" level=info msg="Forcibly stopping sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\"" May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.787 [WARNING][5932] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a9086985-91bc-4199-9a6a-aa40312930e7", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"e6566c37fdf3dd6f2448606738860afc044ae91df3cba4b13ebfccd700b38f05", Pod:"coredns-674b8bbfcf-6jwzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8cc4b8e6492", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.788 [INFO][5932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.788 [INFO][5932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" iface="eth0" netns="" May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.788 [INFO][5932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.788 [INFO][5932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.810 [INFO][5939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" HandleID="k8s-pod-network.cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.810 [INFO][5939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.810 [INFO][5939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.816 [WARNING][5939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" HandleID="k8s-pod-network.cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.817 [INFO][5939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" HandleID="k8s-pod-network.cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-coredns--674b8bbfcf--6jwzq-eth0" May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.818 [INFO][5939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.820856 containerd[1701]: 2025-05-17 00:22:56.819 [INFO][5932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa" May 17 00:22:56.821778 containerd[1701]: time="2025-05-17T00:22:56.820904185Z" level=info msg="TearDown network for sandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\" successfully" May 17 00:22:56.828925 containerd[1701]: time="2025-05-17T00:22:56.828863255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:56.829066 containerd[1701]: time="2025-05-17T00:22:56.828941755Z" level=info msg="RemovePodSandbox \"cd081ce210cc6e8e7e7955dbdc51042d8a43a6fcf21ca46ed88f1477dbfed6fa\" returns successfully" May 17 00:22:56.829512 containerd[1701]: time="2025-05-17T00:22:56.829481360Z" level=info msg="StopPodSandbox for \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\"" May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.862 [WARNING][5953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"7fb9aa5a-970a-4e82-b193-535f4a3ef021", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3", Pod:"goldmane-78d55f7ddc-z6qms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e7ff9966d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.863 [INFO][5953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.863 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" iface="eth0" netns="" May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.863 [INFO][5953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.863 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.887 [INFO][5960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" HandleID="k8s-pod-network.e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.887 [INFO][5960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.887 [INFO][5960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.893 [WARNING][5960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" HandleID="k8s-pod-network.e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.893 [INFO][5960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" HandleID="k8s-pod-network.e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.895 [INFO][5960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.897597 containerd[1701]: 2025-05-17 00:22:56.896 [INFO][5953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:56.897597 containerd[1701]: time="2025-05-17T00:22:56.897578152Z" level=info msg="TearDown network for sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\" successfully" May 17 00:22:56.898283 containerd[1701]: time="2025-05-17T00:22:56.897612053Z" level=info msg="StopPodSandbox for \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\" returns successfully" May 17 00:22:56.898336 containerd[1701]: time="2025-05-17T00:22:56.898293859Z" level=info msg="RemovePodSandbox for \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\"" May 17 00:22:56.898336 containerd[1701]: time="2025-05-17T00:22:56.898328959Z" level=info msg="Forcibly stopping sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\"" May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.936 [WARNING][5974] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"7fb9aa5a-970a-4e82-b193-535f4a3ef021", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"40e73691556c1c44c26bcc7c2826eea17daa8c371530259fba231ff0675bafa3", Pod:"goldmane-78d55f7ddc-z6qms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e7ff9966d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.936 [INFO][5974] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.936 [INFO][5974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" iface="eth0" netns="" May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.936 [INFO][5974] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.936 [INFO][5974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.957 [INFO][5981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" HandleID="k8s-pod-network.e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.957 [INFO][5981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.957 [INFO][5981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.965 [WARNING][5981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" HandleID="k8s-pod-network.e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.965 [INFO][5981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" HandleID="k8s-pod-network.e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-goldmane--78d55f7ddc--z6qms-eth0" May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.967 [INFO][5981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:56.969661 containerd[1701]: 2025-05-17 00:22:56.968 [INFO][5974] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39" May 17 00:22:56.970346 containerd[1701]: time="2025-05-17T00:22:56.969722180Z" level=info msg="TearDown network for sandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\" successfully" May 17 00:22:56.980594 containerd[1701]: time="2025-05-17T00:22:56.980523774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:56.981011 containerd[1701]: time="2025-05-17T00:22:56.980607274Z" level=info msg="RemovePodSandbox \"e9811d50f65918f544c1c2d9915585dfd4b08bc8cff7fea679cfb8df47a6dd39\" returns successfully" May 17 00:22:56.981362 containerd[1701]: time="2025-05-17T00:22:56.981333581Z" level=info msg="StopPodSandbox for \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\"" May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.021 [WARNING][5995] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.021 [INFO][5995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.021 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" iface="eth0" netns="" May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.021 [INFO][5995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.021 [INFO][5995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.041 [INFO][6003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" HandleID="k8s-pod-network.447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.042 [INFO][6003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.042 [INFO][6003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.049 [WARNING][6003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" HandleID="k8s-pod-network.447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.049 [INFO][6003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" HandleID="k8s-pod-network.447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.050 [INFO][6003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:57.053053 containerd[1701]: 2025-05-17 00:22:57.051 [INFO][5995] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:57.053053 containerd[1701]: time="2025-05-17T00:22:57.052907003Z" level=info msg="TearDown network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\" successfully" May 17 00:22:57.053053 containerd[1701]: time="2025-05-17T00:22:57.052939704Z" level=info msg="StopPodSandbox for \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\" returns successfully" May 17 00:22:57.053903 containerd[1701]: time="2025-05-17T00:22:57.053871812Z" level=info msg="RemovePodSandbox for \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\"" May 17 00:22:57.053979 containerd[1701]: time="2025-05-17T00:22:57.053914412Z" level=info msg="Forcibly stopping sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\"" May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.088 [WARNING][6018] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" WorkloadEndpoint="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.088 [INFO][6018] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.088 [INFO][6018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" iface="eth0" netns="" May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.088 [INFO][6018] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.088 [INFO][6018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.107 [INFO][6025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" HandleID="k8s-pod-network.447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.107 [INFO][6025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.107 [INFO][6025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.114 [WARNING][6025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" HandleID="k8s-pod-network.447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.114 [INFO][6025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" HandleID="k8s-pod-network.447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-whisker--cfb5fd58c--rrrg6-eth0" May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.117 [INFO][6025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:57.119092 containerd[1701]: 2025-05-17 00:22:57.117 [INFO][6018] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b" May 17 00:22:57.119092 containerd[1701]: time="2025-05-17T00:22:57.119103079Z" level=info msg="TearDown network for sandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\" successfully" May 17 00:22:57.130650 containerd[1701]: time="2025-05-17T00:22:57.130579979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:57.130990 containerd[1701]: time="2025-05-17T00:22:57.130660280Z" level=info msg="RemovePodSandbox \"447cc06801c456b8ccd13c442a4a2e9b0a440f20f48acf6dba20dae6d833440b\" returns successfully" May 17 00:22:57.131259 containerd[1701]: time="2025-05-17T00:22:57.131215884Z" level=info msg="StopPodSandbox for \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\"" May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.165 [WARNING][6039] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0", GenerateName:"calico-apiserver-c46c46bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c46c46bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62", Pod:"calico-apiserver-c46c46bf5-sgprk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali794ff389f2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.165 [INFO][6039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.165 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" iface="eth0" netns="" May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.165 [INFO][6039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.165 [INFO][6039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.188 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" HandleID="k8s-pod-network.bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.188 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.189 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.196 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" HandleID="k8s-pod-network.bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.196 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" HandleID="k8s-pod-network.bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.197 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:57.200443 containerd[1701]: 2025-05-17 00:22:57.198 [INFO][6039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:57.202245 containerd[1701]: time="2025-05-17T00:22:57.200410186Z" level=info msg="TearDown network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\" successfully" May 17 00:22:57.202383 containerd[1701]: time="2025-05-17T00:22:57.202247902Z" level=info msg="StopPodSandbox for \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\" returns successfully" May 17 00:22:57.203423 containerd[1701]: time="2025-05-17T00:22:57.202820907Z" level=info msg="RemovePodSandbox for \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\"" May 17 00:22:57.203423 containerd[1701]: time="2025-05-17T00:22:57.202858608Z" level=info msg="Forcibly stopping sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\"" May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.237 [WARNING][6062] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0", GenerateName:"calico-apiserver-c46c46bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d30fdcd3-ada9-4b49-b2d5-3ba22cf562c8", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c46c46bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-4e81e33f0f", ContainerID:"8a8a77f8289d92c00b1531496c86b2e3f537eb4f7af49a31ebacb5d3ff495c62", Pod:"calico-apiserver-c46c46bf5-sgprk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali794ff389f2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.237 [INFO][6062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.237 [INFO][6062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" iface="eth0" netns="" May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.237 [INFO][6062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.237 [INFO][6062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.259 [INFO][6069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" HandleID="k8s-pod-network.bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.259 [INFO][6069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.259 [INFO][6069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.265 [WARNING][6069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" HandleID="k8s-pod-network.bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.265 [INFO][6069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" HandleID="k8s-pod-network.bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" Workload="ci--4081.3.3--n--4e81e33f0f-k8s-calico--apiserver--c46c46bf5--sgprk-eth0" May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.266 [INFO][6069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:57.269077 containerd[1701]: 2025-05-17 00:22:57.267 [INFO][6062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d" May 17 00:22:57.269739 containerd[1701]: time="2025-05-17T00:22:57.269149584Z" level=info msg="TearDown network for sandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\" successfully" May 17 00:22:57.277487 containerd[1701]: time="2025-05-17T00:22:57.277345255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:57.277487 containerd[1701]: time="2025-05-17T00:22:57.277431956Z" level=info msg="RemovePodSandbox \"bda2d019f502c60281e0d49bc331db3d2a83a6315bf707870fe44f35f5b5066d\" returns successfully" May 17 00:23:00.962643 containerd[1701]: time="2025-05-17T00:23:00.962545709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:23:02.628769 containerd[1701]: time="2025-05-17T00:23:02.628700901Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:03.078410 containerd[1701]: time="2025-05-17T00:23:03.078323611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:03.078723 containerd[1701]: time="2025-05-17T00:23:03.078366012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:23:03.078796 kubelet[3166]: E0517 00:23:03.078695 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:03.078796 kubelet[3166]: E0517 00:23:03.078768 3166 
kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:03.079411 kubelet[3166]: E0517 00:23:03.078983 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l2l6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-z6qms_calico-system(7fb9aa5a-970a-4e82-b193-535f4a3ef021): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:03.080742 kubelet[3166]: E0517 00:23:03.080702 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:23:03.364500 update_engine[1666]: I20250517 00:23:03.364316 1666 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:23:03.365010 update_engine[1666]: I20250517 00:23:03.364656 1666 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:23:03.365010 update_engine[1666]: I20250517 00:23:03.364995 1666 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:23:03.371117 update_engine[1666]: E20250517 00:23:03.371063 1666 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:23:03.371274 update_engine[1666]: I20250517 00:23:03.371150 1666 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 00:23:05.965812 kubelet[3166]: E0517 00:23:05.965723 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:23:13.365855 update_engine[1666]: I20250517 00:23:13.365232 1666 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:23:13.365855 update_engine[1666]: I20250517 00:23:13.365530 1666 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:23:13.365855 update_engine[1666]: I20250517 00:23:13.365794 1666 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
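Every pull failure above stops at the same step: before touching the manifest, containerd requests an anonymous bearer token from ghcr.io's token endpoint, and that GET is what comes back 403 Forbidden. Below is a minimal Go sketch of the same request, useful for reproducing the failure outside containerd; the service and scope values are copied from the log URL, and the rest is the standard registry token flow, not containerd's internal code.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// The same anonymous-token request containerd issues before pulling
	// ghcr.io/flatcar/calico/goldmane:v3.30.0 (scope copied from the journal).
	q := url.Values{}
	q.Set("service", "ghcr.io")
	q.Set("scope", "repository:flatcar/calico/goldmane:pull")
	resp, err := http.Get("https://ghcr.io/token?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The journal shows this step answering 403 Forbidden instead of 200 plus a token.
	fmt.Println("status:", resp.Status)
	if resp.StatusCode == http.StatusOK {
		var body struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&body); err == nil {
			fmt.Println("anonymous token length:", len(body.Token))
		}
	}
}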
May 17 00:23:13.395509 update_engine[1666]: E20250517 00:23:13.395360 1666 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:23:13.395509 update_engine[1666]: I20250517 00:23:13.395467 1666 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 00:23:13.998688 systemd[1]: run-containerd-runc-k8s.io-755a11aece27ce4f11c88528f245c4a090288bd362110f91d974665cfec1da64-runc.80chN9.mount: Deactivated successfully. May 17 00:23:17.963273 kubelet[3166]: E0517 00:23:17.962538 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:23:19.965626 containerd[1701]: time="2025-05-17T00:23:19.965477254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:23:20.148624 containerd[1701]: time="2025-05-17T00:23:20.148392146Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:20.153300 containerd[1701]: time="2025-05-17T00:23:20.153091487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:20.153300 containerd[1701]: time="2025-05-17T00:23:20.153241888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:23:20.154296 kubelet[3166]: E0517 00:23:20.153621 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:20.154296 kubelet[3166]: E0517 00:23:20.153679 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:20.154296 kubelet[3166]: E0517 00:23:20.153820 3166 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cc198a9d9a44458f955616eaa1e83f23,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbk6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ffb5d99bc-xn6zf_calico-system(5e5288d0-1e2c-4d58-be59-3922b5edffe0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:20.156317 containerd[1701]: time="2025-05-17T00:23:20.156020613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:23:20.318576 containerd[1701]: time="2025-05-17T00:23:20.318422842Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:20.323167 containerd[1701]: time="2025-05-17T00:23:20.322825481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:20.323167 containerd[1701]: time="2025-05-17T00:23:20.322953382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:23:20.323771 kubelet[3166]: E0517 00:23:20.323500 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous 
token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:20.323771 kubelet[3166]: E0517 00:23:20.323685 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:20.324917 kubelet[3166]: E0517 00:23:20.324149 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbk6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ffb5d99bc-xn6zf_calico-system(5e5288d0-1e2c-4d58-be59-3922b5edffe0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:20.326316 kubelet[3166]: E0517 00:23:20.326270 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:23:23.366148 update_engine[1666]: I20250517 00:23:23.365532 1666 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:23:23.366148 update_engine[1666]: I20250517 00:23:23.365840 1666 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:23:23.366148 update_engine[1666]: I20250517 00:23:23.366096 1666 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:23:23.373232 update_engine[1666]: E20250517 00:23:23.370678 1666 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.370756 1666 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.370769 1666 omaha_request_action.cc:617] Omaha request response: May 17 00:23:23.373232 update_engine[1666]: E20250517 00:23:23.370867 1666 omaha_request_action.cc:636] Omaha request network transfer failed. May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.370900 1666 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.370909 1666 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.370920 1666 update_attempter.cc:306] Processing Done. May 17 00:23:23.373232 update_engine[1666]: E20250517 00:23:23.370940 1666 update_attempter.cc:619] Update failed. May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.370949 1666 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.370955 1666 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.370965 1666 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.371057 1666 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.371093 1666 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:23:23.373232 update_engine[1666]: I20250517 00:23:23.371100 1666 omaha_request_action.cc:272] Request: May 17 00:23:23.373232 update_engine[1666]: May 17 00:23:23.373232 update_engine[1666]: May 17 00:23:23.373910 locksmithd[1715]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 00:23:23.374321 update_engine[1666]: May 17 00:23:23.374321 update_engine[1666]: May 17 00:23:23.374321 update_engine[1666]: May 17 00:23:23.374321 update_engine[1666]: May 17 00:23:23.374321 update_engine[1666]: I20250517 00:23:23.371109 1666 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:23:23.374321 update_engine[1666]: I20250517 00:23:23.373013 1666 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:23:23.374911 update_engine[1666]: I20250517 00:23:23.374845 1666 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:23:23.395624 update_engine[1666]: E20250517 00:23:23.395323 1666 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:23:23.395624 update_engine[1666]: I20250517 00:23:23.395428 1666 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:23:23.395624 update_engine[1666]: I20250517 00:23:23.395443 1666 omaha_request_action.cc:617] Omaha request response: May 17 00:23:23.395624 update_engine[1666]: I20250517 00:23:23.395454 1666 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:23:23.395624 update_engine[1666]: I20250517 00:23:23.395462 1666 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:23:23.395624 update_engine[1666]: I20250517 00:23:23.395469 1666 update_attempter.cc:306] Processing Done. May 17 00:23:23.395624 update_engine[1666]: I20250517 00:23:23.395479 1666 update_attempter.cc:310] Error event sent. 
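The update_engine errors here are configuration, not network trouble: the Omaha server URL is the literal string "disabled" (Flatcar's convention for switching update checks off), so every transfer dies at DNS resolution, libcurl retries on its 1-second timeout source, and the OmahaRequestAction eventually fails and reschedules the next check (44m21s below). A rough Go sketch of that retry loop follows; the retry count and delay are read off this journal, not taken from the update_engine source.

package main

import (
	"fmt"
	"net"
	"time"
)

// One libcurl_http_fetcher transfer attempt: resolving the Omaha host fails
// immediately because the configured server is the literal string "disabled".
func fetchOnce(host string) error {
	if _, err := net.LookupHost(host); err != nil {
		return fmt.Errorf("Could not resolve host: %s", host)
	}
	return nil
}

func main() {
	// The journal numbers the retries up to 3 before the action gives up;
	// the 1-second pause matches "Setting up timeout source: 1 seconds."
	const maxRetries = 3
	for attempt := 1; attempt <= maxRetries; attempt++ {
		if err := fetchOnce("disabled"); err == nil {
			fmt.Println("Omaha request succeeded")
			return
		}
		fmt.Printf("No HTTP response, retry %d\n", attempt+1)
		time.Sleep(1 * time.Second)
	}
	// After the retry budget is spent, the transfer fails, an error event is
	// sent, and the next update check is scheduled.
	fmt.Println("Omaha request network transfer failed")
}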
May 17 00:23:23.395624 update_engine[1666]: I20250517 00:23:23.395494 1666 update_check_scheduler.cc:74] Next update check in 44m21s May 17 00:23:23.396032 locksmithd[1715]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 00:23:32.964715 containerd[1701]: time="2025-05-17T00:23:32.964436147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:23:32.965185 kubelet[3166]: E0517 00:23:32.964298 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:23:33.147306 containerd[1701]: time="2025-05-17T00:23:33.147250160Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:33.151843 containerd[1701]: time="2025-05-17T00:23:33.151778800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:33.152019 containerd[1701]: time="2025-05-17T00:23:33.151944601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:23:33.152912 kubelet[3166]: E0517 00:23:33.152299 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:33.152912 kubelet[3166]: E0517 00:23:33.152362 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:33.152912 kubelet[3166]: E0517 00:23:33.152534 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l2l6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-z6qms_calico-system(7fb9aa5a-970a-4e82-b193-535f4a3ef021): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:33.154134 kubelet[3166]: E0517 00:23:33.154094 3166 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:23:44.965917 kubelet[3166]: E0517 00:23:44.965829 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:23:45.968938 kubelet[3166]: E0517 00:23:45.968792 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:23:48.142167 systemd[1]: Started sshd@7-10.200.8.41:22-10.200.16.10:45652.service - OpenSSH per-connection server daemon (10.200.16.10:45652). May 17 00:23:48.775951 sshd[6185]: Accepted publickey for core from 10.200.16.10 port 45652 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:23:48.778689 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:48.786758 systemd-logind[1665]: New session 10 of user core. May 17 00:23:48.794488 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:23:49.356809 sshd[6185]: pam_unix(sshd:session): session closed for user core May 17 00:23:49.364713 systemd[1]: sshd@7-10.200.8.41:22-10.200.16.10:45652.service: Deactivated successfully. May 17 00:23:49.370431 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:23:49.372661 systemd-logind[1665]: Session 10 logged out. Waiting for processes to exit. 
May 17 00:23:49.375658 systemd-logind[1665]: Removed session 10. May 17 00:23:54.471598 systemd[1]: Started sshd@8-10.200.8.41:22-10.200.16.10:52016.service - OpenSSH per-connection server daemon (10.200.16.10:52016). May 17 00:23:55.095313 sshd[6200]: Accepted publickey for core from 10.200.16.10 port 52016 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:23:55.096906 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:55.102128 systemd-logind[1665]: New session 11 of user core. May 17 00:23:55.109412 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:23:55.607954 sshd[6200]: pam_unix(sshd:session): session closed for user core May 17 00:23:55.611913 systemd[1]: sshd@8-10.200.8.41:22-10.200.16.10:52016.service: Deactivated successfully. May 17 00:23:55.615235 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:23:55.617151 systemd-logind[1665]: Session 11 logged out. Waiting for processes to exit. May 17 00:23:55.618381 systemd-logind[1665]: Removed session 11. May 17 00:23:56.963228 kubelet[3166]: E0517 00:23:56.962905 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:23:58.962318 kubelet[3166]: E0517 00:23:58.962233 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:24:00.726801 systemd[1]: Started sshd@9-10.200.8.41:22-10.200.16.10:60912.service - OpenSSH per-connection server daemon (10.200.16.10:60912). 
May 17 00:24:01.361608 sshd[6242]: Accepted publickey for core from 10.200.16.10 port 60912 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:24:01.363398 sshd[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:01.369485 systemd-logind[1665]: New session 12 of user core. May 17 00:24:01.379397 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:24:01.874029 sshd[6242]: pam_unix(sshd:session): session closed for user core May 17 00:24:01.877992 systemd[1]: sshd@9-10.200.8.41:22-10.200.16.10:60912.service: Deactivated successfully. May 17 00:24:01.880376 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:24:01.881259 systemd-logind[1665]: Session 12 logged out. Waiting for processes to exit. May 17 00:24:01.882401 systemd-logind[1665]: Removed session 12. May 17 00:24:01.990538 systemd[1]: Started sshd@10-10.200.8.41:22-10.200.16.10:60914.service - OpenSSH per-connection server daemon (10.200.16.10:60914). May 17 00:24:02.610592 sshd[6258]: Accepted publickey for core from 10.200.16.10 port 60914 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:24:02.612398 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:02.617535 systemd-logind[1665]: New session 13 of user core. May 17 00:24:02.624369 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:24:03.164414 sshd[6258]: pam_unix(sshd:session): session closed for user core May 17 00:24:03.167585 systemd[1]: sshd@10-10.200.8.41:22-10.200.16.10:60914.service: Deactivated successfully. May 17 00:24:03.170070 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:24:03.171912 systemd-logind[1665]: Session 13 logged out. Waiting for processes to exit. May 17 00:24:03.173772 systemd-logind[1665]: Removed session 13. May 17 00:24:03.279534 systemd[1]: Started sshd@11-10.200.8.41:22-10.200.16.10:60920.service - OpenSSH per-connection server daemon (10.200.16.10:60920). May 17 00:24:03.900184 sshd[6269]: Accepted publickey for core from 10.200.16.10 port 60920 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:24:03.901869 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:03.906037 systemd-logind[1665]: New session 14 of user core. May 17 00:24:03.912384 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:24:04.409561 sshd[6269]: pam_unix(sshd:session): session closed for user core May 17 00:24:04.414520 systemd[1]: sshd@11-10.200.8.41:22-10.200.16.10:60920.service: Deactivated successfully. May 17 00:24:04.417277 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:24:04.418321 systemd-logind[1665]: Session 14 logged out. Waiting for processes to exit. May 17 00:24:04.419721 systemd-logind[1665]: Removed session 14. May 17 00:24:09.526563 systemd[1]: Started sshd@12-10.200.8.41:22-10.200.16.10:39990.service - OpenSSH per-connection server daemon (10.200.16.10:39990). May 17 00:24:10.149251 sshd[6317]: Accepted publickey for core from 10.200.16.10 port 39990 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:24:10.151030 sshd[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:10.155842 systemd-logind[1665]: New session 15 of user core. May 17 00:24:10.162383 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 17 00:24:10.655574 sshd[6317]: pam_unix(sshd:session): session closed for user core May 17 00:24:10.659647 systemd-logind[1665]: Session 15 logged out. Waiting for processes to exit. May 17 00:24:10.660289 systemd[1]: sshd@12-10.200.8.41:22-10.200.16.10:39990.service: Deactivated successfully. May 17 00:24:10.662801 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:24:10.664362 systemd-logind[1665]: Removed session 15. May 17 00:24:10.963116 containerd[1701]: time="2025-05-17T00:24:10.962250065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:24:11.131791 containerd[1701]: time="2025-05-17T00:24:11.131727758Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:11.135532 containerd[1701]: time="2025-05-17T00:24:11.135483591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:11.135671 containerd[1701]: time="2025-05-17T00:24:11.135589892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:24:11.135847 kubelet[3166]: E0517 00:24:11.135792 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:11.136472 kubelet[3166]: E0517 00:24:11.135855 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:11.136472 kubelet[3166]: E0517 00:24:11.136011 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cc198a9d9a44458f955616eaa1e83f23,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbk6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ffb5d99bc-xn6zf_calico-system(5e5288d0-1e2c-4d58-be59-3922b5edffe0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:11.138706 containerd[1701]: time="2025-05-17T00:24:11.138673219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:24:11.306413 containerd[1701]: time="2025-05-17T00:24:11.306349296Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:11.310102 containerd[1701]: time="2025-05-17T00:24:11.310055329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:11.310242 containerd[1701]: time="2025-05-17T00:24:11.310156930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:24:11.310445 kubelet[3166]: E0517 00:24:11.310368 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:11.310589 kubelet[3166]: E0517 00:24:11.310440 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:11.310659 kubelet[3166]: E0517 00:24:11.310604 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbk6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ffb5d99bc-xn6zf_calico-system(5e5288d0-1e2c-4d58-be59-3922b5edffe0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:11.312339 kubelet[3166]: E0517 00:24:11.312230 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:24:13.965916 containerd[1701]: time="2025-05-17T00:24:13.965860721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:24:14.206697 containerd[1701]: time="2025-05-17T00:24:14.206635642Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:14.212504 containerd[1701]: time="2025-05-17T00:24:14.212403292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:14.212719 containerd[1701]: time="2025-05-17T00:24:14.212439893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:24:14.212893 kubelet[3166]: E0517 00:24:14.212827 3166 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:14.213629 kubelet[3166]: E0517 00:24:14.212909 3166 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:14.213629 kubelet[3166]: E0517 00:24:14.213133 3166 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4l2l6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-z6qms_calico-system(7fb9aa5a-970a-4e82-b193-535f4a3ef021): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:14.214562 kubelet[3166]: E0517 00:24:14.214494 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:24:15.770560 systemd[1]: Started sshd@13-10.200.8.41:22-10.200.16.10:40002.service - OpenSSH per-connection server daemon (10.200.16.10:40002). May 17 00:24:16.390302 sshd[6362]: Accepted publickey for core from 10.200.16.10 port 40002 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:24:16.391988 sshd[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:16.397495 systemd-logind[1665]: New session 16 of user core. May 17 00:24:16.400383 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:24:16.923379 sshd[6362]: pam_unix(sshd:session): session closed for user core May 17 00:24:16.928058 systemd[1]: sshd@13-10.200.8.41:22-10.200.16.10:40002.service: Deactivated successfully. May 17 00:24:16.930886 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:24:16.931806 systemd-logind[1665]: Session 16 logged out. Waiting for processes to exit. May 17 00:24:16.932884 systemd-logind[1665]: Removed session 16. May 17 00:24:22.036540 systemd[1]: Started sshd@14-10.200.8.41:22-10.200.16.10:58274.service - OpenSSH per-connection server daemon (10.200.16.10:58274). May 17 00:24:22.658357 sshd[6375]: Accepted publickey for core from 10.200.16.10 port 58274 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:24:22.660903 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:22.669412 systemd-logind[1665]: New session 17 of user core. May 17 00:24:22.674413 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:24:23.209303 sshd[6375]: pam_unix(sshd:session): session closed for user core May 17 00:24:23.213871 systemd-logind[1665]: Session 17 logged out. Waiting for processes to exit. May 17 00:24:23.216681 systemd[1]: sshd@14-10.200.8.41:22-10.200.16.10:58274.service: Deactivated successfully. May 17 00:24:23.219341 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:24:23.227054 systemd-logind[1665]: Removed session 17. 
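The alternation between ErrImagePull and ImagePullBackOff throughout this journal is kubelet's per-image backoff at work: each failed pull roughly doubles the wait before the next real attempt, and in between the pod sync loop only emits "Back-off pulling image". Below is a simplified Go sketch of that mechanism; the 10-second initial delay and 5-minute cap are kubelet's commonly cited defaults, assumed here rather than confirmed from this log.

package main

import (
	"fmt"
	"time"
)

// Per-image backoff entry: while next is in the future, the sync loop
// reports ImagePullBackOff instead of attempting another pull.
type backoff struct {
	delay time.Duration
	next  time.Time
}

func (b *backoff) tryPull(now time.Time, pull func() error) error {
	if now.Before(b.next) {
		return fmt.Errorf("ImagePullBackOff: back-off %s pulling image", b.delay)
	}
	if err := pull(); err != nil {
		if b.delay == 0 {
			b.delay = 10 * time.Second // assumed initial delay
		} else {
			b.delay *= 2
			if b.delay > 5*time.Minute {
				b.delay = 5 * time.Minute // assumed cap
			}
		}
		b.next = now.Add(b.delay)
		return fmt.Errorf("ErrImagePull: %w", err)
	}
	b.delay, b.next = 0, time.Time{}
	return nil
}

func main() {
	b := &backoff{}
	failing := func() error { return fmt.Errorf("403 Forbidden from the ghcr.io token endpoint") }
	now := time.Now()
	for i := 0; i < 5; i++ {
		fmt.Println(b.tryPull(now, failing))
		now = now.Add(15 * time.Second) // simulated gap between pod sync attempts
	}
}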
May 17 00:24:25.964835 kubelet[3166]: E0517 00:24:25.964580 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0" May 17 00:24:28.325497 systemd[1]: Started sshd@15-10.200.8.41:22-10.200.16.10:58278.service - OpenSSH per-connection server daemon (10.200.16.10:58278). May 17 00:24:28.945975 sshd[6407]: Accepted publickey for core from 10.200.16.10 port 58278 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:24:28.947574 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:28.952160 systemd-logind[1665]: New session 18 of user core. May 17 00:24:28.959358 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:24:29.453603 sshd[6407]: pam_unix(sshd:session): session closed for user core May 17 00:24:29.457390 systemd-logind[1665]: Session 18 logged out. Waiting for processes to exit. May 17 00:24:29.458221 systemd[1]: sshd@15-10.200.8.41:22-10.200.16.10:58278.service: Deactivated successfully. May 17 00:24:29.460624 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:24:29.461702 systemd-logind[1665]: Removed session 18. May 17 00:24:29.572600 systemd[1]: Started sshd@16-10.200.8.41:22-10.200.16.10:39006.service - OpenSSH per-connection server daemon (10.200.16.10:39006). May 17 00:24:29.964089 kubelet[3166]: E0517 00:24:29.963815 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021" May 17 00:24:30.192551 sshd[6420]: Accepted publickey for core from 10.200.16.10 port 39006 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA May 17 00:24:30.194228 sshd[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:30.198938 systemd-logind[1665]: New session 19 of user core. 
May 17 00:24:30.203551 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:24:30.759084 sshd[6420]: pam_unix(sshd:session): session closed for user core
May 17 00:24:30.762665 systemd[1]: sshd@16-10.200.8.41:22-10.200.16.10:39006.service: Deactivated successfully.
May 17 00:24:30.765351 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:24:30.767029 systemd-logind[1665]: Session 19 logged out. Waiting for processes to exit.
May 17 00:24:30.768471 systemd-logind[1665]: Removed session 19.
May 17 00:24:30.875395 systemd[1]: Started sshd@17-10.200.8.41:22-10.200.16.10:39014.service - OpenSSH per-connection server daemon (10.200.16.10:39014).
May 17 00:24:31.494330 sshd[6430]: Accepted publickey for core from 10.200.16.10 port 39014 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA
May 17 00:24:31.496108 sshd[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:31.501514 systemd-logind[1665]: New session 20 of user core.
May 17 00:24:31.506385 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:24:32.899751 sshd[6430]: pam_unix(sshd:session): session closed for user core
May 17 00:24:32.903932 systemd[1]: sshd@17-10.200.8.41:22-10.200.16.10:39014.service: Deactivated successfully.
May 17 00:24:32.906357 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:24:32.907147 systemd-logind[1665]: Session 20 logged out. Waiting for processes to exit.
May 17 00:24:32.908469 systemd-logind[1665]: Removed session 20.
May 17 00:24:33.014499 systemd[1]: Started sshd@18-10.200.8.41:22-10.200.16.10:39030.service - OpenSSH per-connection server daemon (10.200.16.10:39030).
May 17 00:24:33.648453 sshd[6450]: Accepted publickey for core from 10.200.16.10 port 39030 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA
May 17 00:24:33.649057 sshd[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:33.653277 systemd-logind[1665]: New session 21 of user core.
May 17 00:24:33.659356 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:24:34.309753 sshd[6450]: pam_unix(sshd:session): session closed for user core
May 17 00:24:34.314751 systemd[1]: sshd@18-10.200.8.41:22-10.200.16.10:39030.service: Deactivated successfully.
May 17 00:24:34.316833 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:24:34.317917 systemd-logind[1665]: Session 21 logged out. Waiting for processes to exit.
May 17 00:24:34.319033 systemd-logind[1665]: Removed session 21.
May 17 00:24:34.425521 systemd[1]: Started sshd@19-10.200.8.41:22-10.200.16.10:39034.service - OpenSSH per-connection server daemon (10.200.16.10:39034).
May 17 00:24:35.046597 sshd[6461]: Accepted publickey for core from 10.200.16.10 port 39034 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA
May 17 00:24:35.048925 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:35.053260 systemd-logind[1665]: New session 22 of user core.
May 17 00:24:35.058535 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:24:35.550141 sshd[6461]: pam_unix(sshd:session): session closed for user core
May 17 00:24:35.554969 systemd[1]: sshd@19-10.200.8.41:22-10.200.16.10:39034.service: Deactivated successfully.
May 17 00:24:35.557099 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:24:35.557934 systemd-logind[1665]: Session 22 logged out. Waiting for processes to exit.
May 17 00:24:35.558929 systemd-logind[1665]: Removed session 22.
May 17 00:24:36.963735 kubelet[3166]: E0517 00:24:36.963659 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0"
May 17 00:24:40.668537 systemd[1]: Started sshd@20-10.200.8.41:22-10.200.16.10:40088.service - OpenSSH per-connection server daemon (10.200.16.10:40088).
May 17 00:24:41.293589 sshd[6497]: Accepted publickey for core from 10.200.16.10 port 40088 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA
May 17 00:24:41.295164 sshd[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:41.299331 systemd-logind[1665]: New session 23 of user core.
May 17 00:24:41.303380 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:24:41.828095 sshd[6497]: pam_unix(sshd:session): session closed for user core
May 17 00:24:41.832358 systemd[1]: sshd@20-10.200.8.41:22-10.200.16.10:40088.service: Deactivated successfully.
May 17 00:24:41.834383 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:24:41.835675 systemd-logind[1665]: Session 23 logged out. Waiting for processes to exit.
May 17 00:24:41.837129 systemd-logind[1665]: Removed session 23.
May 17 00:24:42.962845 kubelet[3166]: E0517 00:24:42.962641 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021"
May 17 00:24:46.955352 systemd[1]: Started sshd@21-10.200.8.41:22-10.200.16.10:40094.service - OpenSSH per-connection server daemon (10.200.16.10:40094).
May 17 00:24:47.586759 sshd[6512]: Accepted publickey for core from 10.200.16.10 port 40094 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA
May 17 00:24:47.590334 sshd[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:47.602067 systemd-logind[1665]: New session 24 of user core.
May 17 00:24:47.605459 systemd[1]: Started session-24.scope - Session 24 of User core.
May 17 00:24:48.123861 sshd[6512]: pam_unix(sshd:session): session closed for user core
May 17 00:24:48.131611 systemd[1]: sshd@21-10.200.8.41:22-10.200.16.10:40094.service: Deactivated successfully.
May 17 00:24:48.135971 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:24:48.137309 systemd-logind[1665]: Session 24 logged out. Waiting for processes to exit.
May 17 00:24:48.138728 systemd-logind[1665]: Removed session 24.
May 17 00:24:50.964586 kubelet[3166]: E0517 00:24:50.964493 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0"
May 17 00:24:53.244626 systemd[1]: Started sshd@22-10.200.8.41:22-10.200.16.10:52190.service - OpenSSH per-connection server daemon (10.200.16.10:52190).
May 17 00:24:53.869727 sshd[6525]: Accepted publickey for core from 10.200.16.10 port 52190 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA
May 17 00:24:53.873240 sshd[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:53.882335 systemd-logind[1665]: New session 25 of user core.
May 17 00:24:53.889221 systemd[1]: Started session-25.scope - Session 25 of User core.
May 17 00:24:54.413502 sshd[6525]: pam_unix(sshd:session): session closed for user core
May 17 00:24:54.418401 systemd-logind[1665]: Session 25 logged out. Waiting for processes to exit.
May 17 00:24:54.419473 systemd[1]: sshd@22-10.200.8.41:22-10.200.16.10:52190.service: Deactivated successfully.
May 17 00:24:54.424150 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:24:54.428462 systemd-logind[1665]: Removed session 25.
May 17 00:24:55.427962 systemd[1]: run-containerd-runc-k8s.io-755a11aece27ce4f11c88528f245c4a090288bd362110f91d974665cfec1da64-runc.ItLWXD.mount: Deactivated successfully.
May 17 00:24:57.962213 kubelet[3166]: E0517 00:24:57.962089 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021"
May 17 00:24:59.527595 systemd[1]: Started sshd@23-10.200.8.41:22-10.200.16.10:48388.service - OpenSSH per-connection server daemon (10.200.16.10:48388).
May 17 00:25:00.159011 sshd[6558]: Accepted publickey for core from 10.200.16.10 port 48388 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA
May 17 00:25:00.160679 sshd[6558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:25:00.166073 systemd-logind[1665]: New session 26 of user core.
May 17 00:25:00.171366 systemd[1]: Started session-26.scope - Session 26 of User core.
May 17 00:25:00.667587 sshd[6558]: pam_unix(sshd:session): session closed for user core
May 17 00:25:00.671921 systemd[1]: sshd@23-10.200.8.41:22-10.200.16.10:48388.service: Deactivated successfully.
May 17 00:25:00.674375 systemd[1]: session-26.scope: Deactivated successfully.
May 17 00:25:00.675555 systemd-logind[1665]: Session 26 logged out. Waiting for processes to exit.
May 17 00:25:00.676596 systemd-logind[1665]: Removed session 26.
May 17 00:25:02.963466 kubelet[3166]: E0517 00:25:02.963396 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ffb5d99bc-xn6zf" podUID="5e5288d0-1e2c-4d58-be59-3922b5edffe0"
May 17 00:25:05.782535 systemd[1]: Started sshd@24-10.200.8.41:22-10.200.16.10:48398.service - OpenSSH per-connection server daemon (10.200.16.10:48398).
May 17 00:25:06.408580 sshd[6573]: Accepted publickey for core from 10.200.16.10 port 48398 ssh2: RSA SHA256:zuj+13EgbQGBPytUFw1nZkyIFh5si2H0nzF3PiVxpYA
May 17 00:25:06.410273 sshd[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:25:06.415073 systemd-logind[1665]: New session 27 of user core.
May 17 00:25:06.423381 systemd[1]: Started session-27.scope - Session 27 of User core.
May 17 00:25:06.913165 sshd[6573]: pam_unix(sshd:session): session closed for user core
May 17 00:25:06.917305 systemd[1]: sshd@24-10.200.8.41:22-10.200.16.10:48398.service: Deactivated successfully.
May 17 00:25:06.920841 systemd[1]: session-27.scope: Deactivated successfully.
May 17 00:25:06.921929 systemd-logind[1665]: Session 27 logged out. Waiting for processes to exit.
May 17 00:25:06.924236 systemd-logind[1665]: Removed session 27.
May 17 00:25:10.961294 kubelet[3166]: E0517 00:25:10.961230 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-z6qms" podUID="7fb9aa5a-970a-4e82-b193-535f4a3ef021"
May 17 00:25:11.059634 kubelet[3166]: E0517 00:25:11.059386 3166 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: EOF" event="&Event{ObjectMeta:{goldmane-78d55f7ddc-z6qms.184028aa4e095cfb calico-system 1616 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-78d55f7ddc-z6qms,UID:7fb9aa5a-970a-4e82-b193-535f4a3ef021,APIVersion:v1,ResourceVersion:846,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\",Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-4e81e33f0f,},FirstTimestamp:2025-05-17 00:22:47 +0000 UTC,LastTimestamp:2025-05-17 00:25:10.961142363 +0000 UTC m=+195.121668500,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-4e81e33f0f,}"