May 27 17:47:00.937035 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 17:47:00.937059 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:47:00.937068 kernel: BIOS-provided physical RAM map:
May 27 17:47:00.937074 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 17:47:00.937080 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 27 17:47:00.937086 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 27 17:47:00.937094 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
May 27 17:47:00.937100 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd0fff] usable
May 27 17:47:00.937106 kernel: BIOS-e820: [mem 0x000000003ffd1000-0x000000003fffafff] ACPI data
May 27 17:47:00.937112 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 27 17:47:00.937118 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 27 17:47:00.937124 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 27 17:47:00.937129 kernel: printk: legacy bootconsole [earlyser0] enabled
May 27 17:47:00.937135 kernel: NX (Execute Disable) protection: active
May 27 17:47:00.937144 kernel: APIC: Static calls initialized
May 27 17:47:00.937151 kernel: efi: EFI v2.7 by Microsoft
May 27 17:47:00.937157 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ebb9a98 RNG=0x3ffd2018
May 27 17:47:00.937164 kernel: random: crng init done
May 27 17:47:00.937170 kernel: secureboot: Secure boot disabled
May 27 17:47:00.937176 kernel: SMBIOS 3.1.0 present.
May 27 17:47:00.937183 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/21/2024
May 27 17:47:00.937189 kernel: DMI: Memory slots populated: 2/2
May 27 17:47:00.937196 kernel: Hypervisor detected: Microsoft Hyper-V
May 27 17:47:00.937202 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
May 27 17:47:00.937209 kernel: Hyper-V: Nested features: 0x3e0101
May 27 17:47:00.937215 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 27 17:47:00.937221 kernel: Hyper-V: Using hypercall for remote TLB flush
May 27 17:47:00.937228 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 27 17:47:00.937234 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 27 17:47:00.937240 kernel: tsc: Detected 2299.999 MHz processor
May 27 17:47:00.937246 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 17:47:00.937253 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 17:47:00.937260 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
May 27 17:47:00.937268 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 17:47:00.937275 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 17:47:00.937281 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
May 27 17:47:00.937288 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
May 27 17:47:00.937294 kernel: Using GB pages for direct mapping
May 27 17:47:00.937301 kernel: ACPI: Early table checksum verification disabled
May 27 17:47:00.937307 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 27 17:47:00.937317 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:47:00.937325 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:47:00.937331 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 27 17:47:00.937338 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 27 17:47:00.937345 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:47:00.937352 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:47:00.937361 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:47:00.937367 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
May 27 17:47:00.937374 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
May 27 17:47:00.937381 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:47:00.937388 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 27 17:47:00.937394 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
May 27 17:47:00.937401 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 27 17:47:00.937408 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 27 17:47:00.937414 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 27 17:47:00.937423 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 27 17:47:00.937430 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
May 27 17:47:00.937436 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
May 27 17:47:00.937443 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 27 17:47:00.937450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 27 17:47:00.937457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
May 27 17:47:00.937464 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
May 27 17:47:00.937471 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
May 27 17:47:00.937477 kernel: Zone ranges:
May 27 17:47:00.937486 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 17:47:00.937492 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 27 17:47:00.937499 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 27 17:47:00.937506 kernel: Device empty
May 27 17:47:00.937513 kernel: Movable zone start for each node
May 27 17:47:00.937520 kernel: Early memory node ranges
May 27 17:47:00.937527 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 27 17:47:00.937533 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 27 17:47:00.937540 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd0fff]
May 27 17:47:00.937548 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 27 17:47:00.937554 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 27 17:47:00.937561 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 27 17:47:00.937568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 17:47:00.937583 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 27 17:47:00.937590 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
May 27 17:47:00.937597 kernel: On node 0, zone DMA32: 46 pages in unavailable ranges
May 27 17:47:00.937604 kernel: ACPI: PM-Timer IO Port: 0x408
May 27 17:47:00.937611 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 17:47:00.937619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 17:47:00.937626 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 17:47:00.937633 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 27 17:47:00.937640 kernel: TSC deadline timer available
May 27 17:47:00.937646 kernel: CPU topo: Max. logical packages: 1
May 27 17:47:00.937653 kernel: CPU topo: Max. logical dies: 1
May 27 17:47:00.937660 kernel: CPU topo: Max. dies per package: 1
May 27 17:47:00.937667 kernel: CPU topo: Max. threads per core: 2
May 27 17:47:00.937674 kernel: CPU topo: Num. cores per package: 1
May 27 17:47:00.937681 kernel: CPU topo: Num. threads per package: 2
May 27 17:47:00.937687 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 17:47:00.937694 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 27 17:47:00.937700 kernel: Booting paravirtualized kernel on Hyper-V
May 27 17:47:00.937707 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 17:47:00.937714 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 17:47:00.937720 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 17:47:00.937728 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 17:47:00.937734 kernel: pcpu-alloc: [0] 0 1
May 27 17:47:00.937743 kernel: Hyper-V: PV spinlocks enabled
May 27 17:47:00.937750 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 17:47:00.937758 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:47:00.937766 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:47:00.937772 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 27 17:47:00.937779 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:47:00.937786 kernel: Fallback order for Node 0: 0
May 27 17:47:00.937793 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2096877
May 27 17:47:00.937802 kernel: Policy zone: Normal
May 27 17:47:00.937809 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:47:00.937816 kernel: software IO TLB: area num 2.
May 27 17:47:00.937823 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 17:47:00.937830 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 17:47:00.937837 kernel: ftrace: allocated 157 pages with 5 groups
May 27 17:47:00.937844 kernel: Dynamic Preempt: voluntary
May 27 17:47:00.937850 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:47:00.937858 kernel: rcu: RCU event tracing is enabled.
May 27 17:47:00.937867 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 17:47:00.937880 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:47:00.937888 kernel: Rude variant of Tasks RCU enabled.
May 27 17:47:00.937897 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:47:00.937905 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:47:00.937913 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 17:47:00.937921 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:47:00.937929 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:47:00.937936 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:47:00.937944 kernel: Using NULL legacy PIC
May 27 17:47:00.937952 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 27 17:47:00.937961 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:47:00.937969 kernel: Console: colour dummy device 80x25
May 27 17:47:00.937977 kernel: printk: legacy console [tty1] enabled
May 27 17:47:00.937984 kernel: printk: legacy console [ttyS0] enabled
May 27 17:47:00.937992 kernel: printk: legacy bootconsole [earlyser0] disabled
May 27 17:47:00.938000 kernel: ACPI: Core revision 20240827
May 27 17:47:00.938009 kernel: Failed to register legacy timer interrupt
May 27 17:47:00.938016 kernel: APIC: Switch to symmetric I/O mode setup
May 27 17:47:00.938024 kernel: x2apic enabled
May 27 17:47:00.938032 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 17:47:00.938040 kernel: Hyper-V: Host Build 10.0.26100.1221-1-0
May 27 17:47:00.938047 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 27 17:47:00.938055 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
May 27 17:47:00.938063 kernel: Hyper-V: Using IPI hypercalls
May 27 17:47:00.938071 kernel: APIC: send_IPI() replaced with hv_send_ipi()
May 27 17:47:00.938080 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
May 27 17:47:00.938088 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
May 27 17:47:00.938096 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
May 27 17:47:00.938104 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
May 27 17:47:00.938112 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
May 27 17:47:00.938120 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
May 27 17:47:00.938128 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
May 27 17:47:00.938136 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 17:47:00.938144 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 27 17:47:00.938153 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 27 17:47:00.938160 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 17:47:00.938167 kernel: Spectre V2 : Mitigation: Retpolines
May 27 17:47:00.938175 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 17:47:00.938183 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 27 17:47:00.938190 kernel: RETBleed: Vulnerable
May 27 17:47:00.938198 kernel: Speculative Store Bypass: Vulnerable
May 27 17:47:00.938206 kernel: ITS: Mitigation: Aligned branch/return thunks
May 27 17:47:00.938213 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 17:47:00.938221 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 17:47:00.938228 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 17:47:00.938237 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 27 17:47:00.938244 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 27 17:47:00.938251 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 27 17:47:00.938257 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
May 27 17:47:00.938264 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
May 27 17:47:00.938271 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
May 27 17:47:00.938278 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 17:47:00.938285 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 27 17:47:00.938292 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 27 17:47:00.938299 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 27 17:47:00.938307 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
May 27 17:47:00.938315 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
May 27 17:47:00.938322 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
May 27 17:47:00.938329 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
May 27 17:47:00.938335 kernel: Freeing SMP alternatives memory: 32K
May 27 17:47:00.938343 kernel: pid_max: default: 32768 minimum: 301
May 27 17:47:00.938350 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:47:00.938357 kernel: landlock: Up and running.
May 27 17:47:00.938364 kernel: SELinux: Initializing.
May 27 17:47:00.938372 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 17:47:00.938379 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 17:47:00.938386 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
May 27 17:47:00.938395 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
May 27 17:47:00.938402 kernel: signal: max sigframe size: 11952
May 27 17:47:00.938409 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:47:00.938417 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:47:00.938424 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:47:00.938431 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 27 17:47:00.938438 kernel: smp: Bringing up secondary CPUs ...
May 27 17:47:00.938445 kernel: smpboot: x86: Booting SMP configuration:
May 27 17:47:00.938452 kernel: .... node #0, CPUs: #1
May 27 17:47:00.938461 kernel: smp: Brought up 1 node, 2 CPUs
May 27 17:47:00.938468 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
May 27 17:47:00.938475 kernel: Memory: 8082308K/8387508K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 299992K reserved, 0K cma-reserved)
May 27 17:47:00.938483 kernel: devtmpfs: initialized
May 27 17:47:00.938491 kernel: x86/mm: Memory block size: 128MB
May 27 17:47:00.938498 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 27 17:47:00.938506 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:47:00.938512 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 17:47:00.938519 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:47:00.938528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:47:00.938535 kernel: audit: initializing netlink subsys (disabled)
May 27 17:47:00.938542 kernel: audit: type=2000 audit(1748368018.055:1): state=initialized audit_enabled=0 res=1
May 27 17:47:00.938550 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:47:00.938558 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 17:47:00.938566 kernel: cpuidle: using governor menu
May 27 17:47:00.938592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:47:00.938601 kernel: dca service started, version 1.12.1
May 27 17:47:00.938609 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
May 27 17:47:00.938619 kernel: e820: reserve RAM buffer [mem 0x3ffd1000-0x3fffffff]
May 27 17:47:00.938627 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 17:47:00.938635 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:47:00.938643 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:47:00.938651 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:47:00.938659 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:47:00.938667 kernel: ACPI: Added _OSI(Module Device)
May 27 17:47:00.938674 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:47:00.938682 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:47:00.938691 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:47:00.938699 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:47:00.938707 kernel: ACPI: Interpreter enabled
May 27 17:47:00.938714 kernel: ACPI: PM: (supports S0 S5)
May 27 17:47:00.938721 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 17:47:00.938729 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 17:47:00.938737 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 27 17:47:00.938744 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 27 17:47:00.938752 kernel: iommu: Default domain type: Translated
May 27 17:47:00.938760 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 17:47:00.938767 kernel: efivars: Registered efivars operations
May 27 17:47:00.938774 kernel: PCI: Using ACPI for IRQ routing
May 27 17:47:00.938781 kernel: PCI: System does not support PCI
May 27 17:47:00.938786 kernel: vgaarb: loaded
May 27 17:47:00.938790 kernel: clocksource: Switched to clocksource tsc-early
May 27 17:47:00.938795 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:47:00.938800 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:47:00.938805 kernel: pnp: PnP ACPI init
May 27 17:47:00.938811 kernel: pnp: PnP ACPI: found 3 devices
May 27 17:47:00.938815 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 17:47:00.938820 kernel: NET: Registered PF_INET protocol family
May 27 17:47:00.938825 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 17:47:00.938830 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 27 17:47:00.938835 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:47:00.938839 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:47:00.938844 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 27 17:47:00.938848 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 27 17:47:00.938854 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 17:47:00.938859 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 17:47:00.938863 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:47:00.938868 kernel: NET: Registered PF_XDP protocol family
May 27 17:47:00.938873 kernel: PCI: CLS 0 bytes, default 64
May 27 17:47:00.938877 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 27 17:47:00.938882 kernel: software IO TLB: mapped [mem 0x000000003aa59000-0x000000003ea59000] (64MB)
May 27 17:47:00.938887 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
May 27 17:47:00.938891 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
May 27 17:47:00.938897 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
May 27 17:47:00.938902 kernel: clocksource: Switched to clocksource tsc
May 27 17:47:00.938906 kernel: Initialise system trusted keyrings
May 27 17:47:00.938911 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 27 17:47:00.938915 kernel: Key type asymmetric registered
May 27 17:47:00.938920 kernel: Asymmetric key parser 'x509' registered
May 27 17:47:00.938925 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 17:47:00.938929 kernel: io scheduler mq-deadline registered
May 27 17:47:00.938934 kernel: io scheduler kyber registered
May 27 17:47:00.938940 kernel: io scheduler bfq registered
May 27 17:47:00.938944 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 17:47:00.938949 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:47:00.938954 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:47:00.938959 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 27 17:47:00.938963 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:47:00.938968 kernel: i8042: PNP: No PS/2 controller found.
May 27 17:47:00.939056 kernel: rtc_cmos 00:02: registered as rtc0
May 27 17:47:00.939103 kernel: rtc_cmos 00:02: setting system clock to 2025-05-27T17:47:00 UTC (1748368020)
May 27 17:47:00.939144 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 27 17:47:00.939149 kernel: intel_pstate: Intel P-state driver initializing
May 27 17:47:00.939154 kernel: efifb: probing for efifb
May 27 17:47:00.939159 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 27 17:47:00.939163 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 27 17:47:00.939168 kernel: efifb: scrolling: redraw
May 27 17:47:00.939173 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 17:47:00.939179 kernel: Console: switching to colour frame buffer device 128x48
May 27 17:47:00.939183 kernel: fb0: EFI VGA frame buffer device
May 27 17:47:00.939188 kernel: pstore: Using crash dump compression: deflate
May 27 17:47:00.939192 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 17:47:00.939197 kernel: NET: Registered PF_INET6 protocol family
May 27 17:47:00.939202 kernel: Segment Routing with IPv6
May 27 17:47:00.939206 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:47:00.939211 kernel: NET: Registered PF_PACKET protocol family
May 27 17:47:00.939215 kernel: Key type dns_resolver registered
May 27 17:47:00.939221 kernel: IPI shorthand broadcast: enabled
May 27 17:47:00.939226 kernel: sched_clock: Marking stable (2760095349, 83136502)->(3112503868, -269272017)
May 27 17:47:00.939230 kernel: registered taskstats version 1
May 27 17:47:00.939235 kernel: Loading compiled-in X.509 certificates
May 27 17:47:00.939240 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 17:47:00.939270 kernel: Demotion targets for Node 0: null
May 27 17:47:00.939275 kernel: Key type .fscrypt registered
May 27 17:47:00.939279 kernel: Key type fscrypt-provisioning registered
May 27 17:47:00.939284 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:47:00.939290 kernel: ima: Allocated hash algorithm: sha1
May 27 17:47:00.939295 kernel: ima: No architecture policies found
May 27 17:47:00.939299 kernel: clk: Disabling unused clocks
May 27 17:47:00.939304 kernel: Warning: unable to open an initial console.
May 27 17:47:00.939309 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 17:47:00.939314 kernel: Write protecting the kernel read-only data: 24576k
May 27 17:47:00.939318 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 17:47:00.939323 kernel: Run /init as init process
May 27 17:47:00.939327 kernel: with arguments:
May 27 17:47:00.939333 kernel: /init
May 27 17:47:00.939338 kernel: with environment:
May 27 17:47:00.939342 kernel: HOME=/
May 27 17:47:00.939347 kernel: TERM=linux
May 27 17:47:00.939352 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:47:00.939357 systemd[1]: Successfully made /usr/ read-only.
May 27 17:47:00.939365 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:47:00.939370 systemd[1]: Detected virtualization microsoft. May 27 17:47:00.939377 systemd[1]: Detected architecture x86-64. May 27 17:47:00.939382 systemd[1]: Running in initrd. May 27 17:47:00.939386 systemd[1]: No hostname configured, using default hostname. May 27 17:47:00.939392 systemd[1]: Hostname set to . May 27 17:47:00.939396 systemd[1]: Initializing machine ID from random generator. May 27 17:47:00.939401 systemd[1]: Queued start job for default target initrd.target. May 27 17:47:00.939406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:47:00.939411 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:47:00.939418 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 17:47:00.939423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:47:00.939428 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 17:47:00.939434 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 17:47:00.939440 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 17:47:00.939445 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
May 27 17:47:00.939450 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:47:00.939456 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:47:00.939461 systemd[1]: Reached target paths.target - Path Units. May 27 17:47:00.939466 systemd[1]: Reached target slices.target - Slice Units. May 27 17:47:00.939471 systemd[1]: Reached target swap.target - Swaps. May 27 17:47:00.939476 systemd[1]: Reached target timers.target - Timer Units. May 27 17:47:00.939481 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:47:00.939486 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:47:00.939491 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 17:47:00.939498 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 17:47:00.939503 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:47:00.939508 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:47:00.939513 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:47:00.939518 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:47:00.939523 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 17:47:00.939528 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:47:00.939533 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 17:47:00.939538 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 17:47:00.939545 systemd[1]: Starting systemd-fsck-usr.service... May 27 17:47:00.939550 systemd[1]: Starting systemd-journald.service - Journal Service... 
May 27 17:47:00.939555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:47:00.939567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:47:00.939608 systemd-journald[205]: Collecting audit messages is disabled. May 27 17:47:00.939632 systemd-journald[205]: Journal started May 27 17:47:00.939653 systemd-journald[205]: Runtime Journal (/run/log/journal/62273444c394494e9c95f2358aad3a5f) is 8M, max 159M, 151M free. May 27 17:47:00.942597 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:47:00.947551 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 17:47:00.950201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:47:00.952030 systemd[1]: Finished systemd-fsck-usr.service. May 27 17:47:00.954727 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 17:47:00.955519 systemd-modules-load[207]: Inserted module 'overlay' May 27 17:47:00.974683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:47:00.983850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:47:00.990271 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 17:47:00.990293 kernel: Bridge firewalling registered May 27 17:47:00.987828 systemd-modules-load[207]: Inserted module 'br_netfilter' May 27 17:47:00.994251 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 17:47:00.994683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 17:47:01.000821 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 27 17:47:01.001208 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:47:01.001433 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:47:01.003151 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:47:01.005359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:47:01.020983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:47:01.022802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:47:01.024164 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:47:01.041271 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:47:01.046687 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:47:01.058567 systemd-resolved[234]: Positive Trust Anchors:
May 27 17:47:01.058770 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:47:01.058800 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:47:01.061216 systemd-resolved[234]: Defaulting to hostname 'linux'.
May 27 17:47:01.061919 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:47:01.063999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:47:01.087660 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:47:01.135590 kernel: SCSI subsystem initialized
May 27 17:47:01.141589 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:47:01.149589 kernel: iscsi: registered transport (tcp)
May 27 17:47:01.164742 kernel: iscsi: registered transport (qla4xxx)
May 27 17:47:01.164781 kernel: QLogic iSCSI HBA Driver
May 27 17:47:01.176187 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:47:01.184238 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:47:01.187063 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:47:01.218054 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:47:01.221666 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:47:01.261593 kernel: raid6: avx512x4 gen() 47824 MB/s
May 27 17:47:01.278586 kernel: raid6: avx512x2 gen() 47469 MB/s
May 27 17:47:01.295585 kernel: raid6: avx512x1 gen() 29695 MB/s
May 27 17:47:01.312588 kernel: raid6: avx2x4 gen() 41783 MB/s
May 27 17:47:01.330585 kernel: raid6: avx2x2 gen() 44160 MB/s
May 27 17:47:01.348296 kernel: raid6: avx2x1 gen() 33212 MB/s
May 27 17:47:01.348311 kernel: raid6: using algorithm avx512x4 gen() 47824 MB/s
May 27 17:47:01.367086 kernel: raid6: .... xor() 7787 MB/s, rmw enabled
May 27 17:47:01.367115 kernel: raid6: using avx512x2 recovery algorithm
May 27 17:47:01.384591 kernel: xor: automatically using best checksumming function avx
May 27 17:47:01.486593 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:47:01.490367 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:47:01.493485 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:47:01.508563 systemd-udevd[453]: Using default interface naming scheme 'v255'.
May 27 17:47:01.512201 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:47:01.518822 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:47:01.540860 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
May 27 17:47:01.557842 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:47:01.558935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:47:01.588429 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:47:01.594229 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:47:01.644593 kernel: cryptd: max_cpu_qlen set to 1000
May 27 17:47:01.649946 kernel: hv_vmbus: Vmbus version:5.3
May 27 17:47:01.650277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:47:01.650492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:47:01.657598 kernel: AES CTR mode by8 optimization enabled
May 27 17:47:01.659338 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:47:01.666811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:47:01.678029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:47:01.678694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:47:01.686315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:47:01.694781 kernel: pps_core: LinuxPPS API ver. 1 registered
May 27 17:47:01.694820 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 27 17:47:01.702596 kernel: hv_vmbus: registering driver hyperv_keyboard
May 27 17:47:01.705590 kernel: PTP clock support registered
May 27 17:47:01.714300 kernel: hv_vmbus: registering driver hv_netvsc
May 27 17:47:01.714331 kernel: hv_vmbus: registering driver hv_storvsc
May 27 17:47:01.762250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:47:01.771657 kernel: hv_vmbus: registering driver hv_pci
May 27 17:47:01.771675 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 17:47:01.771688 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 27 17:47:01.771701 kernel: scsi host0: storvsc_host_t
May 27 17:47:01.773556 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
May 27 17:47:01.778280 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
May 27 17:47:01.778503 kernel: hv_utils: Registering HyperV Utility Driver
May 27 17:47:01.778521 kernel: hv_vmbus: registering driver hv_utils
May 27 17:47:01.779805 kernel: hv_utils: Shutdown IC version 3.2
May 27 17:47:01.782270 kernel: hv_utils: Heartbeat IC version 3.0
May 27 17:47:01.783993 kernel: hv_utils: TimeSync IC version 4.0
May 27 17:47:01.861057 systemd-resolved[234]: Clock change detected. Flushing caches.
May 27 17:47:01.869601 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddf1b08 (unnamed net_device) (uninitialized): VF slot 1 added
May 27 17:47:01.869758 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
May 27 17:47:01.875894 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
May 27 17:47:01.876031 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
May 27 17:47:01.881548 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
May 27 17:47:01.885559 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
May 27 17:47:01.889544 kernel: hv_vmbus: registering driver hid_hyperv
May 27 17:47:01.897086 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 27 17:47:01.897125 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 27 17:47:01.905549 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link)
May 27 17:47:01.909552 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
May 27 17:47:01.912580 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
May 27 17:47:01.918587 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 27 17:47:01.918736 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 17:47:01.920551 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 27 17:47:01.930328 kernel: nvme nvme0: pci function c05b:00:00.0
May 27 17:47:01.930460 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
May 27 17:47:01.938662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:47:01.952564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:47:02.150547 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 27 17:47:02.155550 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:47:02.360125 kernel: nvme nvme0: using unchecked data buffer
May 27 17:47:02.518463 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
May 27 17:47:02.531057 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
May 27 17:47:02.541740 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
May 27 17:47:02.559864 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:47:02.569190 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
May 27 17:47:02.569480 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
May 27 17:47:02.575394 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:47:02.580582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:47:02.580800 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:47:02.588127 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:47:02.594638 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:47:02.614508 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:47:02.618599 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:47:02.629551 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:47:02.907961 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
May 27 17:47:02.908120 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
May 27 17:47:02.917347 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
May 27 17:47:02.918798 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
May 27 17:47:02.923705 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
May 27 17:47:02.926625 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
May 27 17:47:02.931619 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
May 27 17:47:02.931646 kernel: pci 7870:00:00.0: enabling Extended Tags
May 27 17:47:02.947702 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
May 27 17:47:02.947862 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
May 27 17:47:02.950914 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
May 27 17:47:02.954550 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
May 27 17:47:02.962554 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
May 27 17:47:02.965797 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddf1b08 eth0: VF registering: eth1
May 27 17:47:02.965970 kernel: mana 7870:00:00.0 eth1: joined to eth0
May 27 17:47:02.968549 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
May 27 17:47:03.630515 disk-uuid[682]: The operation has completed successfully.
May 27 17:47:03.633633 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:47:03.676588 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:47:03.676665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:47:03.708970 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:47:03.723344 sh[720]: Success
May 27 17:47:03.748749 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:47:03.748801 kernel: device-mapper: uevent: version 1.0.3
May 27 17:47:03.749721 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:47:03.757554 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 17:47:03.944200 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:47:03.947604 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:47:03.959275 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:47:03.971240 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:47:03.971285 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (733)
May 27 17:47:03.973343 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd
May 27 17:47:03.974603 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 17:47:03.975990 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:47:04.219367 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:47:04.223816 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:47:04.227289 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:47:04.227896 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:47:04.233642 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:47:04.279567 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:10) scanned by mount (766)
May 27 17:47:04.279603 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:47:04.283478 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:47:04.285189 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 17:47:04.316686 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:47:04.317006 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:47:04.323030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:47:04.328345 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:47:04.333621 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:47:04.355576 systemd-networkd[902]: lo: Link UP
May 27 17:47:04.359662 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
May 27 17:47:04.355581 systemd-networkd[902]: lo: Gained carrier
May 27 17:47:04.356986 systemd-networkd[902]: Enumeration completed
May 27 17:47:04.357306 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:47:04.357309 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:47:04.370483 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
May 27 17:47:04.370645 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddf1b08 eth0: Data path switched to VF: enP30832s1
May 27 17:47:04.357358 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:47:04.367030 systemd[1]: Reached target network.target - Network.
May 27 17:47:04.368219 systemd-networkd[902]: enP30832s1: Link UP
May 27 17:47:04.368278 systemd-networkd[902]: eth0: Link UP
May 27 17:47:04.368677 systemd-networkd[902]: eth0: Gained carrier
May 27 17:47:04.368686 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:47:04.372014 systemd-networkd[902]: enP30832s1: Gained carrier
May 27 17:47:04.382584 systemd-networkd[902]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 27 17:47:05.043844 ignition[901]: Ignition 2.21.0
May 27 17:47:05.043855 ignition[901]: Stage: fetch-offline
May 27 17:47:05.045894 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:47:05.043928 ignition[901]: no configs at "/usr/lib/ignition/base.d"
May 27 17:47:05.048685 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 17:47:05.043934 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:47:05.044004 ignition[901]: parsed url from cmdline: ""
May 27 17:47:05.044007 ignition[901]: no config URL provided
May 27 17:47:05.044010 ignition[901]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:47:05.044015 ignition[901]: no config at "/usr/lib/ignition/user.ign"
May 27 17:47:05.044018 ignition[901]: failed to fetch config: resource requires networking
May 27 17:47:05.044158 ignition[901]: Ignition finished successfully
May 27 17:47:05.075101 ignition[912]: Ignition 2.21.0
May 27 17:47:05.075124 ignition[912]: Stage: fetch
May 27 17:47:05.075307 ignition[912]: no configs at "/usr/lib/ignition/base.d"
May 27 17:47:05.075313 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:47:05.075373 ignition[912]: parsed url from cmdline: ""
May 27 17:47:05.075375 ignition[912]: no config URL provided
May 27 17:47:05.075379 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:47:05.075383 ignition[912]: no config at "/usr/lib/ignition/user.ign"
May 27 17:47:05.075411 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 27 17:47:05.138763 ignition[912]: GET result: OK
May 27 17:47:05.138827 ignition[912]: config has been read from IMDS userdata
May 27 17:47:05.138849 ignition[912]: parsing config with SHA512: cf8b5793718686e8522ed380f3365d6f38157abd2386b7dbdf08f8f801ebf4fb996f4181e3ecf94a444673544521a749940a9cfd94cf8fdb267c00dc73bca080
May 27 17:47:05.141957 unknown[912]: fetched base config from "system"
May 27 17:47:05.142003 unknown[912]: fetched base config from "system"
May 27 17:47:05.142706 ignition[912]: fetch: fetch complete
May 27 17:47:05.142007 unknown[912]: fetched user config from "azure"
May 27 17:47:05.142710 ignition[912]: fetch: fetch passed
May 27 17:47:05.144109 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 17:47:05.142742 ignition[912]: Ignition finished successfully
May 27 17:47:05.147376 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:47:05.166515 ignition[918]: Ignition 2.21.0
May 27 17:47:05.166524 ignition[918]: Stage: kargs
May 27 17:47:05.166689 ignition[918]: no configs at "/usr/lib/ignition/base.d"
May 27 17:47:05.168573 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:47:05.166695 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:47:05.172838 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:47:05.167297 ignition[918]: kargs: kargs passed
May 27 17:47:05.167325 ignition[918]: Ignition finished successfully
May 27 17:47:05.192195 ignition[924]: Ignition 2.21.0
May 27 17:47:05.192203 ignition[924]: Stage: disks
May 27 17:47:05.194172 ignition[924]: no configs at "/usr/lib/ignition/base.d"
May 27 17:47:05.194192 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:47:05.196248 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:47:05.195299 ignition[924]: disks: disks passed
May 27 17:47:05.201046 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:47:05.195333 ignition[924]: Ignition finished successfully
May 27 17:47:05.203724 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:47:05.206755 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:47:05.210443 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:47:05.211418 systemd[1]: Reached target basic.target - Basic System.
May 27 17:47:05.214158 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:47:05.280243 systemd-fsck[932]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
May 27 17:47:05.283927 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:47:05.289343 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:47:05.523544 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none.
May 27 17:47:05.523906 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:47:05.524398 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:47:05.538497 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:47:05.541484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:47:05.549645 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 27 17:47:05.554463 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:47:05.554489 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:47:05.577318 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:10) scanned by mount (941)
May 27 17:47:05.577344 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:47:05.577357 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:47:05.577369 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 17:47:05.568449 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:47:05.577607 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:47:05.586237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:47:05.655752 systemd-networkd[902]: enP30832s1: Gained IPv6LL
May 27 17:47:05.937458 coreos-metadata[943]: May 27 17:47:05.937 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 17:47:05.940492 coreos-metadata[943]: May 27 17:47:05.940 INFO Fetch successful
May 27 17:47:05.942588 coreos-metadata[943]: May 27 17:47:05.941 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 27 17:47:05.950201 coreos-metadata[943]: May 27 17:47:05.950 INFO Fetch successful
May 27 17:47:05.962153 coreos-metadata[943]: May 27 17:47:05.962 INFO wrote hostname ci-4344.0.0-a-92788821a5 to /sysroot/etc/hostname
May 27 17:47:05.965021 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:47:06.046628 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:47:06.075633 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
May 27 17:47:06.092711 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:47:06.096289 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:47:06.231684 systemd-networkd[902]: eth0: Gained IPv6LL
May 27 17:47:06.746316 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:47:06.751602 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:47:06.760629 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:47:06.768210 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:47:06.770437 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:47:06.789815 ignition[1059]: INFO : Ignition 2.21.0
May 27 17:47:06.789815 ignition[1059]: INFO : Stage: mount
May 27 17:47:06.789815 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:47:06.789815 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:47:06.796934 ignition[1059]: INFO : mount: mount passed
May 27 17:47:06.796934 ignition[1059]: INFO : Ignition finished successfully
May 27 17:47:06.797041 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:47:06.798127 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:47:06.800597 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:47:06.814857 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:47:06.835547 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:10) scanned by mount (1072)
May 27 17:47:06.837668 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:47:06.837766 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:47:06.838566 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 17:47:06.842511 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:47:06.866010 ignition[1088]: INFO : Ignition 2.21.0
May 27 17:47:06.867375 ignition[1088]: INFO : Stage: files
May 27 17:47:06.867375 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:47:06.867375 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:47:06.873858 ignition[1088]: DEBUG : files: compiled without relabeling support, skipping
May 27 17:47:06.873858 ignition[1088]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 17:47:06.873858 ignition[1088]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 17:47:06.892175 ignition[1088]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 17:47:06.895613 ignition[1088]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 17:47:06.895613 ignition[1088]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 17:47:06.894331 unknown[1088]: wrote ssh authorized keys file for user: core
May 27 17:47:06.899801 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 27 17:47:06.899801 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 27 17:47:07.182313 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 17:47:07.431151 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 27 17:47:07.435658 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 17:47:07.435658 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 17:47:07.435658 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:47:07.435658 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:47:07.435658 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:47:07.435658 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:47:07.435658 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:47:07.435658 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:47:07.463766 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:47:07.463766 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:47:07.463766 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 17:47:07.463766 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 17:47:07.463766 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 17:47:07.463766 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 27 17:47:08.066548 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 17:47:08.881809 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 17:47:08.881809 ignition[1088]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 17:47:08.895604 ignition[1088]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:47:08.905253 ignition[1088]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:47:08.905253 ignition[1088]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 17:47:08.912469 ignition[1088]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 27 17:47:08.912469 ignition[1088]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 27 17:47:08.912469 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:47:08.912469 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:47:08.912469 ignition[1088]: INFO : files: files passed
May 27 17:47:08.912469 ignition[1088]: INFO : Ignition finished successfully
May 27 17:47:08.909401 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 17:47:08.913331 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 17:47:08.931308 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 17:47:08.937248 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 17:47:08.938122 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 17:47:08.945850 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:47:08.948361 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:47:08.948361 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:47:08.953001 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:47:08.953355 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:47:08.958427 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:47:08.997638 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:47:08.997715 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:47:09.001854 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:47:09.005407 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:47:09.005786 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:47:09.006432 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:47:09.025718 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:47:09.028649 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:47:09.046916 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:47:09.047826 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:47:09.048002 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:47:09.048250 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:47:09.048337 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:47:09.060640 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:47:09.064923 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:47:09.067139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:47:09.070767 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:47:09.071096 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:47:09.077098 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:47:09.080856 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:47:09.085121 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:47:09.089519 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:47:09.091815 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:47:09.096315 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:47:09.098180 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:47:09.099296 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:47:09.101799 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:47:09.104672 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:47:09.108630 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:47:09.108973 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:47:09.113615 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:47:09.113706 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:47:09.120043 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:47:09.120135 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:47:09.121804 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:47:09.121886 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:47:09.122133 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 27 17:47:09.122207 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:47:09.132230 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:47:09.145687 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:47:09.147656 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:47:09.148013 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:47:09.156473 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:47:09.157027 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:47:09.167314 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:47:09.167463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:47:09.170682 ignition[1143]: INFO : Ignition 2.21.0
May 27 17:47:09.170682 ignition[1143]: INFO : Stage: umount
May 27 17:47:09.170682 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:47:09.170682 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:47:09.170682 ignition[1143]: INFO : umount: umount passed
May 27 17:47:09.170682 ignition[1143]: INFO : Ignition finished successfully
May 27 17:47:09.179913 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:47:09.179985 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:47:09.183056 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:47:09.183116 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:47:09.183389 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:47:09.183417 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:47:09.184012 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 17:47:09.184038 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 17:47:09.184321 systemd[1]: Stopped target network.target - Network.
May 27 17:47:09.184341 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:47:09.184366 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:47:09.184963 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:47:09.184980 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:47:09.190949 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:47:09.192970 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:47:09.211169 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:47:09.213595 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:47:09.213628 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:47:09.214312 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:47:09.214334 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:47:09.214514 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:47:09.214557 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:47:09.214781 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:47:09.214807 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:47:09.215103 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:47:09.223502 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:47:09.228815 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:47:09.228894 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:47:09.233925 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:47:09.234071 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:47:09.234135 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:47:09.283629 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddf1b08 eth0: Data path switched from VF: enP30832s1
May 27 17:47:09.283785 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
May 27 17:47:09.236931 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:47:09.237462 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:47:09.239920 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:47:09.239954 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:47:09.244126 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:47:09.244555 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:47:09.244596 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:47:09.244928 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:47:09.244956 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:47:09.248497 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:47:09.248551 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:47:09.248733 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:47:09.248762 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:47:09.249492 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:47:09.250559 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:47:09.250603 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:47:09.265803 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:47:09.270494 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:47:09.272159 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:47:09.272209 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:47:09.272388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:47:09.272412 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:47:09.272650 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:47:09.272685 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:47:09.272952 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:47:09.272988 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:47:09.273272 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:47:09.273307 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:47:09.277411 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:47:09.287028 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:47:09.287074 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:47:09.291339 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:47:09.291381 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:47:09.297365 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 17:47:09.297410 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:47:09.300634 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:47:09.300665 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:47:09.305182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:47:09.305241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:47:09.311417 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:47:09.311463 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 17:47:09.311485 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 17:47:09.311502 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 17:47:09.311523 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:47:09.311870 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:47:09.311922 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:47:09.317491 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:47:09.317576 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:47:09.333923 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:47:09.334000 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:47:09.337453 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:47:09.341582 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:47:09.341635 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:47:09.344794 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:47:09.363067 systemd[1]: Switching root.
May 27 17:47:09.418523 systemd-journald[205]: Journal stopped
May 27 17:47:12.931993 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
May 27 17:47:12.932020 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:47:12.932028 kernel: SELinux: policy capability open_perms=1
May 27 17:47:12.932034 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:47:12.932038 kernel: SELinux: policy capability always_check_network=0
May 27 17:47:12.932043 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:47:12.932050 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:47:12.932055 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:47:12.932060 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:47:12.932064 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:47:12.932069 kernel: audit: type=1403 audit(1748368030.497:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:47:12.932075 systemd[1]: Successfully loaded SELinux policy in 109.478ms.
May 27 17:47:12.932081 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.015ms.
May 27 17:47:12.932089 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:47:12.932095 systemd[1]: Detected virtualization microsoft.
May 27 17:47:12.932100 systemd[1]: Detected architecture x86-64.
May 27 17:47:12.932106 systemd[1]: Detected first boot.
May 27 17:47:12.932111 systemd[1]: Hostname set to .
May 27 17:47:12.932118 systemd[1]: Initializing machine ID from random generator.
May 27 17:47:12.932124 zram_generator::config[1186]: No configuration found.
May 27 17:47:12.932130 kernel: Guest personality initialized and is inactive
May 27 17:47:12.932135 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
May 27 17:47:12.932140 kernel: Initialized host personality
May 27 17:47:12.932145 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:47:12.932151 systemd[1]: Populated /etc with preset unit settings.
May 27 17:47:12.932158 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:47:12.932163 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:47:12.932168 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:47:12.932174 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:47:12.932179 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:47:12.932186 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:47:12.932193 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:47:12.932202 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:47:12.932211 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:47:12.932220 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:47:12.932228 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:47:12.932237 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:47:12.932245 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:47:12.932254 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:47:12.932262 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:47:12.932269 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:47:12.932276 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:47:12.932282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:47:12.932288 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 17:47:12.932293 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:47:12.932299 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:47:12.932304 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:47:12.932310 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:47:12.932316 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:47:12.932322 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:47:12.932327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:47:12.932334 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:47:12.932339 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:47:12.932345 systemd[1]: Reached target swap.target - Swaps.
May 27 17:47:12.932350 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:47:12.932356 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:47:12.932363 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:47:12.932369 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:47:12.932374 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:47:12.932380 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:47:12.932385 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:47:12.932392 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:47:12.932397 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:47:12.932403 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:47:12.932408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:47:12.932414 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:47:12.932420 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:47:12.932425 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:47:12.932431 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:47:12.932438 systemd[1]: Reached target machines.target - Containers.
May 27 17:47:12.932444 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:47:12.932450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:47:12.932457 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:47:12.932463 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:47:12.932468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:47:12.932474 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:47:12.932480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:47:12.932486 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:47:12.932492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:47:12.932498 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:47:12.932503 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:47:12.932509 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:47:12.932514 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:47:12.932520 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:47:12.932526 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:47:12.932574 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:47:12.932583 kernel: fuse: init (API version 7.41)
May 27 17:47:12.932590 kernel: loop: module loaded
May 27 17:47:12.932598 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:47:12.932606 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:47:12.932614 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:47:12.932623 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:47:12.932631 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:47:12.932639 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:47:12.932649 systemd[1]: Stopped verity-setup.service.
May 27 17:47:12.932658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:47:12.932666 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:47:12.932674 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:47:12.932683 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:47:12.932711 systemd-journald[1293]: Collecting audit messages is disabled.
May 27 17:47:12.932732 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:47:12.932742 systemd-journald[1293]: Journal started
May 27 17:47:12.932762 systemd-journald[1293]: Runtime Journal (/run/log/journal/0c75d9a2dff74478912e8418d21d4beb) is 8M, max 159M, 151M free.
May 27 17:47:12.572675 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:47:12.581801 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 27 17:47:12.582130 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:47:12.935567 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:47:12.936453 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:47:12.937827 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:47:12.939401 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:47:12.943259 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:47:12.945852 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:47:12.945979 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:47:12.948714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:47:12.948865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:47:12.951108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:47:12.951240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:47:12.953970 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:47:12.954098 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:47:12.956553 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:47:12.956701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:47:12.958970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:47:12.961375 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:47:12.964855 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:47:12.967633 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:47:12.983604 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:47:12.995690 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:47:13.005742 kernel: ACPI: bus type drm_connector registered
May 27 17:47:13.003607 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:47:13.008594 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:47:13.008616 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:47:13.011320 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:47:13.015304 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:47:13.017324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:47:13.028091 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:47:13.030693 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:47:13.032925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:47:13.034623 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:47:13.036354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:47:13.043114 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:47:13.046624 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:47:13.052170 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:47:13.055844 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:47:13.056713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:47:13.060978 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:47:13.065410 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:47:13.067263 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:47:13.069800 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:47:13.072768 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:47:13.077505 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:47:13.085919 systemd-journald[1293]: Time spent on flushing to /var/log/journal/0c75d9a2dff74478912e8418d21d4beb is 17.536ms for 992 entries.
May 27 17:47:13.085919 systemd-journald[1293]: System Journal (/var/log/journal/0c75d9a2dff74478912e8418d21d4beb) is 8M, max 2.6G, 2.6G free.
May 27 17:47:13.205938 systemd-journald[1293]: Received client request to flush runtime journal.
May 27 17:47:13.205971 kernel: loop0: detected capacity change from 0 to 146240
May 27 17:47:13.099813 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:47:13.206804 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:47:13.213966 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
May 27 17:47:13.213979 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
May 27 17:47:13.217768 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:47:13.220822 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:47:13.229474 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:47:13.233833 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:47:13.311605 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:47:13.315340 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:47:13.331359 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
May 27 17:47:13.331584 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
May 27 17:47:13.334105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:47:13.457552 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:47:13.487551 kernel: loop1: detected capacity change from 0 to 113872
May 27 17:47:13.867551 kernel: loop2: detected capacity change from 0 to 28496
May 27 17:47:13.872752 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:47:13.876504 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:47:13.901588 systemd-udevd[1354]: Using default interface naming scheme 'v255'.
May 27 17:47:14.024765 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:47:14.029644 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:47:14.092025 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:47:14.146145 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:47:14.165640 kernel: loop3: detected capacity change from 0 to 229808
May 27 17:47:14.171450 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 17:47:14.212572 kernel: hv_vmbus: registering driver hyperv_fb
May 27 17:47:14.215553 kernel: loop4: detected capacity change from 0 to 146240
May 27 17:47:14.218564 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 27 17:47:14.222772 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#226 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:47:14.222963 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 27 17:47:14.223571 kernel: Console: switching to colour dummy device 80x25
May 27 17:47:14.228114 kernel: Console: switching to colour frame buffer device 128x48
May 27 17:47:14.242578 kernel: loop5: detected capacity change from 0 to 113872
May 27 17:47:14.252553 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:47:14.263956 kernel: loop6: detected capacity change from 0 to 28496
May 27 17:47:14.264216 systemd-networkd[1358]: lo: Link UP
May 27 17:47:14.264393 systemd-networkd[1358]: lo: Gained carrier
May 27 17:47:14.269162 systemd-networkd[1358]: Enumeration completed
May 27 17:47:14.269508 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:47:14.269795 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:47:14.269927 systemd-networkd[1358]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:47:14.272566 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
May 27 17:47:14.273670 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:47:14.276573 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
May 27 17:47:14.282435 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddf1b08 eth0: Data path switched to VF: enP30832s1
May 27 17:47:14.279989 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:47:14.280822 systemd-networkd[1358]: enP30832s1: Link UP
May 27 17:47:14.280878 systemd-networkd[1358]: eth0: Link UP
May 27 17:47:14.280880 systemd-networkd[1358]: eth0: Gained carrier
May 27 17:47:14.280892 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:47:14.284766 systemd-networkd[1358]: enP30832s1: Gained carrier
May 27 17:47:14.300007 kernel: loop7: detected capacity change from 0 to 229808
May 27 17:47:14.299577 systemd-networkd[1358]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 27 17:47:14.303591 kernel: hv_vmbus: registering driver hv_balloon
May 27 17:47:14.305564 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 27 17:47:14.320730 (sd-merge)[1406]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 27 17:47:14.325326 (sd-merge)[1406]: Merged extensions into '/usr'.
May 27 17:47:14.333027 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:47:14.340295 systemd[1]: Reload requested from client PID 1326 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:47:14.340307 systemd[1]: Reloading...
May 27 17:47:14.456367 zram_generator::config[1457]: No configuration found.
May 27 17:47:14.547569 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
May 27 17:47:14.610699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:47:14.692205 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
May 27 17:47:14.693932 systemd[1]: Reloading finished in 353 ms.
May 27 17:47:14.715065 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:47:14.745329 systemd[1]: Starting ensure-sysext.service...
May 27 17:47:14.748761 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:47:14.752069 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:47:14.760713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:47:14.778227 systemd[1]: Reload requested from client PID 1530 ('systemctl') (unit ensure-sysext.service)...
May 27 17:47:14.778311 systemd[1]: Reloading...
May 27 17:47:14.782673 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:47:14.782700 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:47:14.782893 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:47:14.783081 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:47:14.783706 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:47:14.783909 systemd-tmpfiles[1532]: ACLs are not supported, ignoring.
May 27 17:47:14.783948 systemd-tmpfiles[1532]: ACLs are not supported, ignoring.
May 27 17:47:14.800943 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:47:14.802589 systemd-tmpfiles[1532]: Skipping /boot
May 27 17:47:14.815966 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:47:14.816059 systemd-tmpfiles[1532]: Skipping /boot
May 27 17:47:14.822581 zram_generator::config[1565]: No configuration found.
May 27 17:47:14.910901 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:47:14.996156 systemd[1]: Reloading finished in 217 ms.
May 27 17:47:15.017312 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:47:15.019548 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:47:15.022820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:47:15.030839 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:47:15.034067 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:47:15.040711 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:47:15.043825 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:47:15.047388 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:47:15.053686 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:47:15.053841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:47:15.060652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:47:15.066796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:47:15.071366 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:47:15.073044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:47:15.073163 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:47:15.073253 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:47:15.078959 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:47:15.079134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:47:15.079263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:47:15.079332 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:47:15.079405 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:47:15.081281 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:47:15.087068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:47:15.087228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:47:15.092158 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:47:15.097572 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:47:15.100433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:47:15.101617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:47:15.104169 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:47:15.104306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:47:15.107336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:47:15.108151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:47:15.111610 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:47:15.116284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:47:15.118654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:47:15.118762 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:47:15.118868 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:47:15.118983 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:47:15.120725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:47:15.124404 systemd[1]: Finished ensure-sysext.service.
May 27 17:47:15.131259 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:47:15.131481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:47:15.135016 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:47:15.135266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:47:15.137811 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:47:15.162942 systemd-resolved[1636]: Positive Trust Anchors:
May 27 17:47:15.162952 systemd-resolved[1636]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:47:15.162981 systemd-resolved[1636]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:47:15.166014 systemd-resolved[1636]: Using system hostname 'ci-4344.0.0-a-92788821a5'.
May 27 17:47:15.167775 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:47:15.169196 systemd[1]: Reached target network.target - Network.
May 27 17:47:15.170370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:47:15.175569 augenrules[1671]: No rules
May 27 17:47:15.176134 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:47:15.176304 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:47:15.412666 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:47:15.415721 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:47:15.767641 systemd-networkd[1358]: eth0: Gained IPv6LL
May 27 17:47:15.769393 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 17:47:15.772739 systemd[1]: Reached target network-online.target - Network is Online.
May 27 17:47:15.895636 systemd-networkd[1358]: enP30832s1: Gained IPv6LL
May 27 17:47:16.844819 ldconfig[1321]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:47:16.854412 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:47:16.856976 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:47:16.872984 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:47:16.874304 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:47:16.876710 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:47:16.879617 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:47:16.882599 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 17:47:16.884319 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:47:16.885999 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:47:16.887724 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:47:16.889071 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:47:16.889101 systemd[1]: Reached target paths.target - Path Units.
May 27 17:47:16.890208 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:47:16.893443 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:47:16.897381 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:47:16.901376 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:47:16.903353 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:47:16.904779 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:47:16.908826 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:47:16.911840 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:47:16.915024 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:47:16.918142 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:47:16.919392 systemd[1]: Reached target basic.target - Basic System.
May 27 17:47:16.920362 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:47:16.920385 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:47:16.922095 systemd[1]: Starting chronyd.service - NTP client/server...
May 27 17:47:16.925663 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:47:16.933328 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 17:47:16.936172 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:47:16.939663 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:47:16.942633 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:47:16.946622 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:47:16.949846 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:47:16.953563 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 17:47:16.955790 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
May 27 17:47:16.959677 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
May 27 17:47:16.964597 jq[1689]: false
May 27 17:47:16.963829 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
May 27 17:47:16.965604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:47:16.971429 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:47:16.976657 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 17:47:16.979910 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:47:16.983337 KVP[1692]: KVP starting; pid is:1692
May 27 17:47:16.983731 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:47:16.987923 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:47:16.998700 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:47:17.001318 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:47:17.001697 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:47:17.002222 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:47:17.005932 google_oslogin_nss_cache[1691]: oslogin_cache_refresh[1691]: Refreshing passwd entry cache
May 27 17:47:17.005946 oslogin_cache_refresh[1691]: Refreshing passwd entry cache
May 27 17:47:17.006958 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:47:17.012550 kernel: hv_utils: KVP IC version 4.0
May 27 17:47:17.012896 KVP[1692]: KVP LIC Version: 3.1
May 27 17:47:17.020133 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:47:17.023167 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:47:17.023328 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:47:17.027179 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:47:17.027592 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:47:17.029632 (chronyd)[1684]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
May 27 17:47:17.046643 jq[1705]: true
May 27 17:47:17.049347 chronyd[1730]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
May 27 17:47:17.053092 extend-filesystems[1690]: Found loop4
May 27 17:47:17.056945 extend-filesystems[1690]: Found loop5
May 27 17:47:17.056945 extend-filesystems[1690]: Found loop6
May 27 17:47:17.056945 extend-filesystems[1690]: Found loop7
May 27 17:47:17.056945 extend-filesystems[1690]: Found sr0
May 27 17:47:17.056945 extend-filesystems[1690]: Found nvme0n1
May 27 17:47:17.056945 extend-filesystems[1690]: Found nvme0n1p1
May 27 17:47:17.075741 extend-filesystems[1690]: Found nvme0n1p2
May 27 17:47:17.075741 extend-filesystems[1690]: Found nvme0n1p3
May 27 17:47:17.075741 extend-filesystems[1690]: Found usr
May 27 17:47:17.075741 extend-filesystems[1690]: Found nvme0n1p4
May 27 17:47:17.075741 extend-filesystems[1690]: Found nvme0n1p6
May 27 17:47:17.075741 extend-filesystems[1690]: Found nvme0n1p7
May 27 17:47:17.075741 extend-filesystems[1690]: Found nvme0n1p9
May 27 17:47:17.075741 extend-filesystems[1690]: Checking size of /dev/nvme0n1p9
May 27 17:47:17.067499 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:47:17.085845 chronyd[1730]: Timezone right/UTC failed leap second check, ignoring
May 27 17:47:17.107583 update_engine[1704]: I20250527 17:47:17.106525 1704 main.cc:92] Flatcar Update Engine starting
May 27 17:47:17.067708 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:47:17.085983 chronyd[1730]: Loaded seccomp filter (level 2)
May 27 17:47:17.078682 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 17:47:17.107886 jq[1731]: true
May 27 17:47:17.078893 (ntainerd)[1720]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:47:17.086944 systemd[1]: Started chronyd.service - NTP client/server.
May 27 17:47:17.124797 extend-filesystems[1690]: Old size kept for /dev/nvme0n1p9
May 27 17:47:17.133434 dbus-daemon[1687]: [system] SELinux support is enabled
May 27 17:47:17.133554 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:47:17.138128 tar[1711]: linux-amd64/LICENSE
May 27 17:47:17.138297 tar[1711]: linux-amd64/helm
May 27 17:47:17.139067 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:47:17.139227 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:47:17.142480 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:47:17.142500 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:47:17.144857 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:47:17.144872 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:47:17.149927 google_oslogin_nss_cache[1691]: oslogin_cache_refresh[1691]: Failure getting users, quitting
May 27 17:47:17.149927 google_oslogin_nss_cache[1691]: oslogin_cache_refresh[1691]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:47:17.149927 google_oslogin_nss_cache[1691]: oslogin_cache_refresh[1691]: Refreshing group entry cache
May 27 17:47:17.149369 oslogin_cache_refresh[1691]: Failure getting users, quitting
May 27 17:47:17.149382 oslogin_cache_refresh[1691]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:47:17.149416 oslogin_cache_refresh[1691]: Refreshing group entry cache
May 27 17:47:17.162584 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:47:17.164900 update_engine[1704]: I20250527 17:47:17.162849 1704 update_check_scheduler.cc:74] Next update check in 7m26s
May 27 17:47:17.166460 google_oslogin_nss_cache[1691]: oslogin_cache_refresh[1691]: Failure getting groups, quitting
May 27 17:47:17.166460 google_oslogin_nss_cache[1691]: oslogin_cache_refresh[1691]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:47:17.165008 oslogin_cache_refresh[1691]: Failure getting groups, quitting
May 27 17:47:17.165017 oslogin_cache_refresh[1691]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:47:17.168699 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:47:17.171179 systemd-logind[1703]: New seat seat0.
May 27 17:47:17.173186 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 17:47:17.173354 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 17:47:17.179306 systemd-logind[1703]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 17:47:17.179500 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:47:17.236373 bash[1765]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:47:17.236986 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:47:17.241962 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 17:47:17.248161 coreos-metadata[1686]: May 27 17:47:17.247 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 17:47:17.258696 coreos-metadata[1686]: May 27 17:47:17.258 INFO Fetch successful
May 27 17:47:17.258927 coreos-metadata[1686]: May 27 17:47:17.258 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
May 27 17:47:17.264593 coreos-metadata[1686]: May 27 17:47:17.264 INFO Fetch successful
May 27 17:47:17.264923 coreos-metadata[1686]: May 27 17:47:17.264 INFO Fetching http://168.63.129.16/machine/b91eb2b2-4fcd-4b0e-a348-62643743058c/c17df8d3%2D01fa%2D42ce%2D86d7%2Dfe04df888553.%5Fci%2D4344.0.0%2Da%2D92788821a5?comp=config&type=sharedConfig&incarnation=1: Attempt #1
May 27 17:47:17.266294 coreos-metadata[1686]: May 27 17:47:17.266 INFO Fetch successful
May 27 17:47:17.266516 coreos-metadata[1686]: May 27 17:47:17.266 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
May 27 17:47:17.275598 coreos-metadata[1686]: May 27 17:47:17.275 INFO Fetch successful
May 27 17:47:17.364029 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 17:47:17.366333 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 17:47:17.485805 locksmithd[1758]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:47:17.782058 sshd_keygen[1735]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 17:47:17.809365 containerd[1720]: time="2025-05-27T17:47:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:47:17.812068 containerd[1720]: time="2025-05-27T17:47:17.812027128Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:47:17.834125 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 17:47:17.838788 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839064414Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.165µs"
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839088972Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839105492Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839217006Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839228374Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839247301Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839287645Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839296486Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839486156Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839497005Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839505426Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:47:17.841492 containerd[1720]: time="2025-05-27T17:47:17.839512064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:47:17.841824 containerd[1720]: time="2025-05-27T17:47:17.839587233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:47:17.841824 containerd[1720]: time="2025-05-27T17:47:17.839725368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:47:17.841824 containerd[1720]: time="2025-05-27T17:47:17.839746089Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:47:17.841824 containerd[1720]: time="2025-05-27T17:47:17.839754393Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:47:17.841824 containerd[1720]: time="2025-05-27T17:47:17.839775374Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:47:17.841824 containerd[1720]: time="2025-05-27T17:47:17.839955726Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 17:47:17.841824 containerd[1720]: time="2025-05-27T17:47:17.839990684Z" level=info msg="metadata content store policy set" policy=shared
May 27 17:47:17.842027 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853163521Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853219369Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853234052Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853244929Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853256574Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853288660Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853300783Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853311986Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853322979Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853335397Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853343094Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853353438Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853442288Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 17:47:17.853983 containerd[1720]: time="2025-05-27T17:47:17.853456766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853473310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853483771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853492260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853502600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 17:47:17.854261
containerd[1720]: time="2025-05-27T17:47:17.853513881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853522646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853543319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853553307Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853562389Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853618635Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853634014Z" level=info msg="Start snapshots syncer" May 27 17:47:17.854261 containerd[1720]: time="2025-05-27T17:47:17.853659192Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 17:47:17.854472 containerd[1720]: time="2025-05-27T17:47:17.853891368Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 17:47:17.854472 containerd[1720]: time="2025-05-27T17:47:17.853928445Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.853991714Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854085435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854100759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854110005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854119277Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854128851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854141301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854151734Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854170857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854179799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854188840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854220030Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854232237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:47:17.854627 containerd[1720]: time="2025-05-27T17:47:17.854239713Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:47:17.854883 containerd[1720]: time="2025-05-27T17:47:17.854248469Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:47:17.854883 containerd[1720]: time="2025-05-27T17:47:17.854255154Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:47:17.854883 containerd[1720]: time="2025-05-27T17:47:17.854263430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:47:17.854883 containerd[1720]: time="2025-05-27T17:47:17.854303799Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 17:47:17.854883 containerd[1720]: time="2025-05-27T17:47:17.854313848Z" level=info msg="runtime interface created" May 27 17:47:17.854883 containerd[1720]: time="2025-05-27T17:47:17.854318228Z" level=info msg="created NRI interface" May 27 17:47:17.854883 containerd[1720]: time="2025-05-27T17:47:17.854326086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:47:17.854883 containerd[1720]: time="2025-05-27T17:47:17.854335014Z" level=info msg="Connect containerd service" May 27 17:47:17.854883 containerd[1720]: time="2025-05-27T17:47:17.854354687Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:47:17.862316 
containerd[1720]: time="2025-05-27T17:47:17.856720908Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:47:17.872731 systemd[1]: issuegen.service: Deactivated successfully. May 27 17:47:17.872902 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 17:47:17.881047 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 17:47:17.888845 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 27 17:47:17.901943 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 17:47:17.905568 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 17:47:17.908945 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 17:47:17.913435 systemd[1]: Reached target getty.target - Login Prompts. May 27 17:47:17.983825 tar[1711]: linux-amd64/README.md May 27 17:47:17.999886 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 17:47:18.516418 containerd[1720]: time="2025-05-27T17:47:18.516370616Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:47:18.516620 containerd[1720]: time="2025-05-27T17:47:18.516530650Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 27 17:47:18.516620 containerd[1720]: time="2025-05-27T17:47:18.516404589Z" level=info msg="Start subscribing containerd event" May 27 17:47:18.516827 containerd[1720]: time="2025-05-27T17:47:18.516753788Z" level=info msg="Start recovering state" May 27 17:47:18.516960 containerd[1720]: time="2025-05-27T17:47:18.516950990Z" level=info msg="Start event monitor" May 27 17:47:18.516992 containerd[1720]: time="2025-05-27T17:47:18.516987415Z" level=info msg="Start cni network conf syncer for default" May 27 17:47:18.517089 containerd[1720]: time="2025-05-27T17:47:18.517034121Z" level=info msg="Start streaming server" May 27 17:47:18.517089 containerd[1720]: time="2025-05-27T17:47:18.517043077Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:47:18.517089 containerd[1720]: time="2025-05-27T17:47:18.517050104Z" level=info msg="runtime interface starting up..." May 27 17:47:18.517089 containerd[1720]: time="2025-05-27T17:47:18.517056377Z" level=info msg="starting plugins..." May 27 17:47:18.517089 containerd[1720]: time="2025-05-27T17:47:18.517066647Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:47:18.517484 containerd[1720]: time="2025-05-27T17:47:18.517429279Z" level=info msg="containerd successfully booted in 0.708417s" May 27 17:47:18.518673 systemd[1]: Started containerd.service - containerd container runtime. May 27 17:47:18.534498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:47:18.536527 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 17:47:18.541112 systemd[1]: Startup finished in 2.884s (kernel) + 9.607s (initrd) + 8.152s (userspace) = 20.644s. 
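Annotation: the earlier containerd error "failed to load cni during init ... no network config found in /etc/cni/net.d" is expected on a node where no CNI addon has run yet; it clears once a network config file exists in /etc/cni/net.d. A minimal hypothetical bridge conflist for illustration (the name, subnet, and filename are placeholders — in practice the cluster's CNI addon installs the real one):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.85.0.0/16"
      }
    }
  ]
}
```

Saved as, e.g., /etc/cni/net.d/10-example.conflist, after which the CRI plugin's conf syncer picks it up without a restart.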
May 27 17:47:18.548698 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:47:18.676669 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 27 17:47:18.678264 login[1831]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 27 17:47:18.688565 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:47:18.689392 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 17:47:18.693111 systemd-logind[1703]: New session 2 of user core. May 27 17:47:18.699022 systemd-logind[1703]: New session 1 of user core. May 27 17:47:18.708472 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:47:18.711279 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 17:47:18.720247 (systemd)[1860]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:47:18.723245 systemd-logind[1703]: New session c1 of user core. May 27 17:47:18.886785 systemd[1860]: Queued start job for default target default.target. May 27 17:47:18.897720 systemd[1860]: Created slice app.slice - User Application Slice. May 27 17:47:18.897952 systemd[1860]: Reached target paths.target - Paths. May 27 17:47:18.898020 systemd[1860]: Reached target timers.target - Timers. May 27 17:47:18.899071 systemd[1860]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:47:18.908548 systemd[1860]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:47:18.908749 systemd[1860]: Reached target sockets.target - Sockets. May 27 17:47:18.908827 systemd[1860]: Reached target basic.target - Basic System. May 27 17:47:18.908935 systemd[1]: Started user@500.service - User Manager for UID 500. 
May 27 17:47:18.909761 systemd[1860]: Reached target default.target - Main User Target. May 27 17:47:18.909784 systemd[1860]: Startup finished in 181ms. May 27 17:47:18.916701 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:47:18.917662 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 17:47:19.097518 waagent[1827]: 2025-05-27T17:47:19.097461Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.097705Z INFO Daemon Daemon OS: flatcar 4344.0.0 May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.098120Z INFO Daemon Daemon Python: 3.11.12 May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.098471Z INFO Daemon Daemon Run daemon May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.098773Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.0.0' May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.099226Z INFO Daemon Daemon Using waagent for provisioning May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.099372Z INFO Daemon Daemon Activate resource disk May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.099782Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.101117Z INFO Daemon Daemon Found device: None May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.101344Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.101856Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.102284Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 27 17:47:19.105132 waagent[1827]: 2025-05-27T17:47:19.102367Z INFO Daemon Daemon 
Running default provisioning handler May 27 17:47:19.120391 waagent[1827]: 2025-05-27T17:47:19.120160Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 27 17:47:19.121491 waagent[1827]: 2025-05-27T17:47:19.121453Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 27 17:47:19.122868 waagent[1827]: 2025-05-27T17:47:19.122315Z INFO Daemon Daemon cloud-init is enabled: False May 27 17:47:19.122868 waagent[1827]: 2025-05-27T17:47:19.122377Z INFO Daemon Daemon Copying ovf-env.xml May 27 17:47:19.168340 waagent[1827]: 2025-05-27T17:47:19.168275Z INFO Daemon Daemon Successfully mounted dvd May 27 17:47:19.190701 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 27 17:47:19.193421 waagent[1827]: 2025-05-27T17:47:19.193378Z INFO Daemon Daemon Detect protocol endpoint May 27 17:47:19.194645 waagent[1827]: 2025-05-27T17:47:19.193523Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 27 17:47:19.194645 waagent[1827]: 2025-05-27T17:47:19.193780Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 27 17:47:19.194645 waagent[1827]: 2025-05-27T17:47:19.193835Z INFO Daemon Daemon Test for route to 168.63.129.16 May 27 17:47:19.194645 waagent[1827]: 2025-05-27T17:47:19.193960Z INFO Daemon Daemon Route to 168.63.129.16 exists May 27 17:47:19.194645 waagent[1827]: 2025-05-27T17:47:19.194324Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 27 17:47:19.216574 waagent[1827]: 2025-05-27T17:47:19.216431Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 27 17:47:19.217403 waagent[1827]: 2025-05-27T17:47:19.216848Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 27 17:47:19.217403 waagent[1827]: 2025-05-27T17:47:19.217020Z INFO Daemon Daemon Server preferred version:2015-04-05 May 27 17:47:19.221935 kubelet[1849]: E0527 17:47:19.221908 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:47:19.223351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:47:19.223459 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:47:19.223701 systemd[1]: kubelet.service: Consumed 875ms CPU time, 268.6M memory peak. May 27 17:47:19.263112 waagent[1827]: 2025-05-27T17:47:19.263067Z INFO Daemon Daemon Initializing goal state during protocol detection May 27 17:47:19.264035 waagent[1827]: 2025-05-27T17:47:19.263598Z INFO Daemon Daemon Forcing an update of the goal state. 
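Annotation: the kubelet crash above ("failed to load kubelet config file, path: /var/lib/kubelet/config.yaml ... no such file or directory") is the normal state before `kubeadm init` or `kubeadm join` has run, since kubeadm is what writes that file; systemd will keep restarting the unit until then. A small sketch of the same check (the function name is illustrative, not part of kubelet):

```shell
# Reproduce the failure mode: kubelet exits when its config file is absent.
# kubeadm normally writes /var/lib/kubelet/config.yaml during init/join.
check_kubelet_config() {
  if [ -f "$1" ]; then
    echo "config present"
  else
    echo "config missing"   # corresponds to the "no such file or directory" error
  fi
}

check_kubelet_config /var/lib/kubelet/config.yaml
```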
May 27 17:47:19.266864 waagent[1827]: 2025-05-27T17:47:19.266835Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 27 17:47:19.278324 waagent[1827]: 2025-05-27T17:47:19.278291Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 27 17:47:19.281448 waagent[1827]: 2025-05-27T17:47:19.278754Z INFO Daemon May 27 17:47:19.281448 waagent[1827]: 2025-05-27T17:47:19.278964Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ad519e8e-0c66-4b5a-bbd0-781622fad251 eTag: 6370097496672229134 source: Fabric] May 27 17:47:19.281448 waagent[1827]: 2025-05-27T17:47:19.279187Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 27 17:47:19.281448 waagent[1827]: 2025-05-27T17:47:19.279452Z INFO Daemon May 27 17:47:19.281448 waagent[1827]: 2025-05-27T17:47:19.279876Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 27 17:47:19.286548 waagent[1827]: 2025-05-27T17:47:19.285953Z INFO Daemon Daemon Downloading artifacts profile blob May 27 17:47:19.367448 waagent[1827]: 2025-05-27T17:47:19.367408Z INFO Daemon Downloaded certificate {'thumbprint': '3C53B0CFF16C7BE8591A71969899FE38E5963E3F', 'hasPrivateKey': True} May 27 17:47:19.369781 waagent[1827]: 2025-05-27T17:47:19.369754Z INFO Daemon Fetch goal state completed May 27 17:47:19.375475 waagent[1827]: 2025-05-27T17:47:19.375447Z INFO Daemon Daemon Starting provisioning May 27 17:47:19.376279 waagent[1827]: 2025-05-27T17:47:19.376214Z INFO Daemon Daemon Handle ovf-env.xml. 
May 27 17:47:19.377278 waagent[1827]: 2025-05-27T17:47:19.376480Z INFO Daemon Daemon Set hostname [ci-4344.0.0-a-92788821a5] May 27 17:47:19.392914 waagent[1827]: 2025-05-27T17:47:19.392881Z INFO Daemon Daemon Publish hostname [ci-4344.0.0-a-92788821a5] May 27 17:47:19.394142 waagent[1827]: 2025-05-27T17:47:19.394110Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 27 17:47:19.395958 waagent[1827]: 2025-05-27T17:47:19.394879Z INFO Daemon Daemon Primary interface is [eth0] May 27 17:47:19.400672 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:47:19.400897 systemd-networkd[1358]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:47:19.400919 systemd-networkd[1358]: eth0: DHCP lease lost May 27 17:47:19.401372 waagent[1827]: 2025-05-27T17:47:19.401331Z INFO Daemon Daemon Create user account if not exists May 27 17:47:19.402641 waagent[1827]: 2025-05-27T17:47:19.402527Z INFO Daemon Daemon User core already exists, skip useradd May 27 17:47:19.402911 waagent[1827]: 2025-05-27T17:47:19.402733Z INFO Daemon Daemon Configure sudoer May 27 17:47:19.407753 waagent[1827]: 2025-05-27T17:47:19.407716Z INFO Daemon Daemon Configure sshd May 27 17:47:19.411380 waagent[1827]: 2025-05-27T17:47:19.411318Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 27 17:47:19.414126 waagent[1827]: 2025-05-27T17:47:19.411485Z INFO Daemon Daemon Deploy ssh public key. 
May 27 17:47:19.425571 systemd-networkd[1358]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 27 17:47:20.471230 waagent[1827]: 2025-05-27T17:47:20.471171Z INFO Daemon Daemon Provisioning complete May 27 17:47:20.480883 waagent[1827]: 2025-05-27T17:47:20.480855Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 27 17:47:20.485753 waagent[1827]: 2025-05-27T17:47:20.481019Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 27 17:47:20.485753 waagent[1827]: 2025-05-27T17:47:20.481190Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 27 17:47:20.570515 waagent[1913]: 2025-05-27T17:47:20.570449Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 27 17:47:20.570740 waagent[1913]: 2025-05-27T17:47:20.570546Z INFO ExtHandler ExtHandler OS: flatcar 4344.0.0 May 27 17:47:20.570740 waagent[1913]: 2025-05-27T17:47:20.570589Z INFO ExtHandler ExtHandler Python: 3.11.12 May 27 17:47:20.570740 waagent[1913]: 2025-05-27T17:47:20.570626Z INFO ExtHandler ExtHandler CPU Arch: x86_64 May 27 17:47:20.587701 waagent[1913]: 2025-05-27T17:47:20.587662Z INFO ExtHandler ExtHandler Distro: flatcar-4344.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; May 27 17:47:20.587824 waagent[1913]: 2025-05-27T17:47:20.587802Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 17:47:20.587885 waagent[1913]: 2025-05-27T17:47:20.587854Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 17:47:20.592754 waagent[1913]: 2025-05-27T17:47:20.592714Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 27 17:47:20.601286 waagent[1913]: 2025-05-27T17:47:20.601260Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 27 
17:47:20.601595 waagent[1913]: 2025-05-27T17:47:20.601570Z INFO ExtHandler May 27 17:47:20.601634 waagent[1913]: 2025-05-27T17:47:20.601620Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 75d21f13-d369-4900-b111-087a213a8cb0 eTag: 6370097496672229134 source: Fabric] May 27 17:47:20.601807 waagent[1913]: 2025-05-27T17:47:20.601785Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 27 17:47:20.602090 waagent[1913]: 2025-05-27T17:47:20.602066Z INFO ExtHandler May 27 17:47:20.602121 waagent[1913]: 2025-05-27T17:47:20.602100Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 27 17:47:20.605131 waagent[1913]: 2025-05-27T17:47:20.605104Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 27 17:47:20.674891 waagent[1913]: 2025-05-27T17:47:20.674848Z INFO ExtHandler Downloaded certificate {'thumbprint': '3C53B0CFF16C7BE8591A71969899FE38E5963E3F', 'hasPrivateKey': True} May 27 17:47:20.675162 waagent[1913]: 2025-05-27T17:47:20.675137Z INFO ExtHandler Fetch goal state completed May 27 17:47:20.687156 waagent[1913]: 2025-05-27T17:47:20.687117Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 27 17:47:20.690836 waagent[1913]: 2025-05-27T17:47:20.690794Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1913 May 27 17:47:20.690930 waagent[1913]: 2025-05-27T17:47:20.690895Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 27 17:47:20.691125 waagent[1913]: 2025-05-27T17:47:20.691106Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 27 17:47:20.691974 waagent[1913]: 2025-05-27T17:47:20.691948Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk'] May 27 17:47:20.692226 waagent[1913]: 
2025-05-27T17:47:20.692204Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 27 17:47:20.692322 waagent[1913]: 2025-05-27T17:47:20.692305Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 27 17:47:20.692695 waagent[1913]: 2025-05-27T17:47:20.692668Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 27 17:47:20.705425 waagent[1913]: 2025-05-27T17:47:20.705404Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 27 17:47:20.705550 waagent[1913]: 2025-05-27T17:47:20.705521Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 27 17:47:20.710246 waagent[1913]: 2025-05-27T17:47:20.710106Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 27 17:47:20.714794 systemd[1]: Reload requested from client PID 1928 ('systemctl') (unit waagent.service)... May 27 17:47:20.714805 systemd[1]: Reloading... May 27 17:47:20.776562 zram_generator::config[1963]: No configuration found. May 27 17:47:20.855442 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:47:20.937580 systemd[1]: Reloading finished in 222 ms. 
May 27 17:47:20.959782 waagent[1913]: 2025-05-27T17:47:20.959680Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 27 17:47:20.959845 waagent[1913]: 2025-05-27T17:47:20.959780Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 27 17:47:21.119528 waagent[1913]: 2025-05-27T17:47:21.119453Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 27 17:47:21.119737 waagent[1913]: 2025-05-27T17:47:21.119712Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 27 17:47:21.120377 waagent[1913]: 2025-05-27T17:47:21.120347Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 17:47:21.120449 waagent[1913]: 2025-05-27T17:47:21.120381Z INFO ExtHandler ExtHandler Starting env monitor service. May 27 17:47:21.120478 waagent[1913]: 2025-05-27T17:47:21.120456Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 17:47:21.120658 waagent[1913]: 2025-05-27T17:47:21.120636Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 27 17:47:21.121056 waagent[1913]: 2025-05-27T17:47:21.121015Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
May 27 17:47:21.121119 waagent[1913]: 2025-05-27T17:47:21.121096Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 17:47:21.121463 waagent[1913]: 2025-05-27T17:47:21.121433Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 27 17:47:21.121577 waagent[1913]: 2025-05-27T17:47:21.121524Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 17:47:21.121682 waagent[1913]: 2025-05-27T17:47:21.121657Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 27 17:47:21.121682 waagent[1913]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 27 17:47:21.121682 waagent[1913]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 May 27 17:47:21.121682 waagent[1913]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 27 17:47:21.121682 waagent[1913]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 27 17:47:21.121682 waagent[1913]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 27 17:47:21.121682 waagent[1913]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 27 17:47:21.122001 waagent[1913]: 2025-05-27T17:47:21.121966Z INFO EnvHandler ExtHandler Configure routes May 27 17:47:21.122127 waagent[1913]: 2025-05-27T17:47:21.122093Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 27 17:47:21.122430 waagent[1913]: 2025-05-27T17:47:21.122407Z INFO EnvHandler ExtHandler Gateway:None May 27 17:47:21.122502 waagent[1913]: 2025-05-27T17:47:21.122457Z INFO EnvHandler ExtHandler Routes:None May 27 17:47:21.122991 waagent[1913]: 2025-05-27T17:47:21.122927Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 27 17:47:21.123050 waagent[1913]: 2025-05-27T17:47:21.123035Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
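Annotation: the routing table that MonitorHandler dumps above is the raw /proc/net/route format, where addresses are little-endian hex. Decoding the gateway field `0108C80A` from the default route recovers the 10.200.8.1 gateway that systemd-networkd logged earlier. A plain-shell sketch of the decoding (nothing Azure- or waagent-specific):

```shell
# Decode a little-endian hex address from /proc/net/route into dotted-quad.
# Example: gateway field 0108C80A from the default route above.
hex=0108C80A
b1=${hex%??????}              # "01" -> last octet
b2=${hex#??}; b2=${b2%????}   # "08"
b3=${hex#????}; b3=${b3%??}   # "C8"
b4=${hex#??????}              # "0A" -> first octet
printf '%d.%d.%d.%d\n' "0x$b4" "0x$b3" "0x$b2" "0x$b1"
# prints 10.200.8.1 — matching the DHCP gateway logged by systemd-networkd
```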
May 27 17:47:21.123119 waagent[1913]: 2025-05-27T17:47:21.123101Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 27 17:47:21.130161 waagent[1913]: 2025-05-27T17:47:21.130134Z INFO ExtHandler ExtHandler May 27 17:47:21.130217 waagent[1913]: 2025-05-27T17:47:21.130184Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3acc6c55-9275-4376-8fc6-105eff80da3f correlation 8eda6467-02e1-462c-a6fa-c262e87b9151 created: 2025-05-27T17:46:25.498421Z] May 27 17:47:21.130433 waagent[1913]: 2025-05-27T17:47:21.130413Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 27 17:47:21.130812 waagent[1913]: 2025-05-27T17:47:21.130789Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] May 27 17:47:21.153748 waagent[1913]: 2025-05-27T17:47:21.153715Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command May 27 17:47:21.153748 waagent[1913]: Try `iptables -h' or 'iptables --help' for more information.) 
May 27 17:47:21.154124 waagent[1913]: 2025-05-27T17:47:21.154100Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E6256D2E-A417-421A-86F6-7A9CC4AAE7CD;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] May 27 17:47:21.155131 waagent[1913]: 2025-05-27T17:47:21.155093Z INFO MonitorHandler ExtHandler Network interfaces: May 27 17:47:21.155131 waagent[1913]: Executing ['ip', '-a', '-o', 'link']: May 27 17:47:21.155131 waagent[1913]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 27 17:47:21.155131 waagent[1913]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:df:1b:08 brd ff:ff:ff:ff:ff:ff\ alias Network Device May 27 17:47:21.155131 waagent[1913]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:df:1b:08 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 May 27 17:47:21.155131 waagent[1913]: Executing ['ip', '-4', '-a', '-o', 'address']: May 27 17:47:21.155131 waagent[1913]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 27 17:47:21.155131 waagent[1913]: 2: eth0 inet 10.200.8.19/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever May 27 17:47:21.155131 waagent[1913]: Executing ['ip', '-6', '-a', '-o', 'address']: May 27 17:47:21.155131 waagent[1913]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 27 17:47:21.155131 waagent[1913]: 2: eth0 inet6 fe80::6245:bdff:fedf:1b08/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 27 17:47:21.155131 waagent[1913]: 3: enP30832s1 inet6 fe80::6245:bdff:fedf:1b08/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 27 17:47:21.212176 waagent[1913]: 
2025-05-27T17:47:21.212136Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: May 27 17:47:21.212176 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:47:21.212176 waagent[1913]: pkts bytes target prot opt in out source destination May 27 17:47:21.212176 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 27 17:47:21.212176 waagent[1913]: pkts bytes target prot opt in out source destination May 27 17:47:21.212176 waagent[1913]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:47:21.212176 waagent[1913]: pkts bytes target prot opt in out source destination May 27 17:47:21.212176 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 27 17:47:21.212176 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 27 17:47:21.212176 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 27 17:47:21.214348 waagent[1913]: 2025-05-27T17:47:21.214309Z INFO EnvHandler ExtHandler Current Firewall rules: May 27 17:47:21.214348 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:47:21.214348 waagent[1913]: pkts bytes target prot opt in out source destination May 27 17:47:21.214348 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 27 17:47:21.214348 waagent[1913]: pkts bytes target prot opt in out source destination May 27 17:47:21.214348 waagent[1913]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:47:21.214348 waagent[1913]: pkts bytes target prot opt in out source destination May 27 17:47:21.214348 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 27 17:47:21.214348 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 27 17:47:21.214348 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 27 17:47:29.474391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 27 17:47:29.476194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:47:29.998336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:47:30.006771 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:47:30.074241 kubelet[2064]: E0527 17:47:30.074194 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:47:30.076854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:47:30.076970 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:47:30.077289 systemd[1]: kubelet.service: Consumed 118ms CPU time, 108.4M memory peak. May 27 17:47:40.327939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 17:47:40.329743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:47:40.842381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:47:40.844983 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:47:40.868924 chronyd[1730]: Selected source PHC0 May 27 17:47:40.925684 kubelet[2080]: E0527 17:47:40.925651 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:47:40.927145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:47:40.927262 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:47:40.927590 systemd[1]: kubelet.service: Consumed 116ms CPU time, 108.7M memory peak. May 27 17:47:44.360699 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:47:44.361692 systemd[1]: Started sshd@0-10.200.8.19:22-10.200.16.10:32978.service - OpenSSH per-connection server daemon (10.200.16.10:32978). May 27 17:47:45.038609 sshd[2088]: Accepted publickey for core from 10.200.16.10 port 32978 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:47:45.039835 sshd-session[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:45.044203 systemd-logind[1703]: New session 3 of user core. May 27 17:47:45.051668 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:47:45.604952 systemd[1]: Started sshd@1-10.200.8.19:22-10.200.16.10:32986.service - OpenSSH per-connection server daemon (10.200.16.10:32986). 
May 27 17:47:46.228680 sshd[2093]: Accepted publickey for core from 10.200.16.10 port 32986 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:47:46.229905 sshd-session[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:46.234487 systemd-logind[1703]: New session 4 of user core. May 27 17:47:46.249668 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:47:46.687000 sshd[2095]: Connection closed by 10.200.16.10 port 32986 May 27 17:47:46.687779 sshd-session[2093]: pam_unix(sshd:session): session closed for user core May 27 17:47:46.690933 systemd[1]: sshd@1-10.200.8.19:22-10.200.16.10:32986.service: Deactivated successfully. May 27 17:47:46.692235 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:47:46.692874 systemd-logind[1703]: Session 4 logged out. Waiting for processes to exit. May 27 17:47:46.693785 systemd-logind[1703]: Removed session 4. May 27 17:47:46.802303 systemd[1]: Started sshd@2-10.200.8.19:22-10.200.16.10:32992.service - OpenSSH per-connection server daemon (10.200.16.10:32992). May 27 17:47:47.432627 sshd[2101]: Accepted publickey for core from 10.200.16.10 port 32992 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:47:47.433886 sshd-session[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:47.438064 systemd-logind[1703]: New session 5 of user core. May 27 17:47:47.446666 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 17:47:47.873336 sshd[2103]: Connection closed by 10.200.16.10 port 32992 May 27 17:47:47.874190 sshd-session[2101]: pam_unix(sshd:session): session closed for user core May 27 17:47:47.877026 systemd[1]: sshd@2-10.200.8.19:22-10.200.16.10:32992.service: Deactivated successfully. May 27 17:47:47.878435 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:47:47.879973 systemd-logind[1703]: Session 5 logged out. 
Waiting for processes to exit. May 27 17:47:47.880693 systemd-logind[1703]: Removed session 5. May 27 17:47:47.990554 systemd[1]: Started sshd@3-10.200.8.19:22-10.200.16.10:32994.service - OpenSSH per-connection server daemon (10.200.16.10:32994). May 27 17:47:48.613261 sshd[2109]: Accepted publickey for core from 10.200.16.10 port 32994 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:47:48.614460 sshd-session[2109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:48.618580 systemd-logind[1703]: New session 6 of user core. May 27 17:47:48.629664 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 17:47:49.055927 sshd[2111]: Connection closed by 10.200.16.10 port 32994 May 27 17:47:49.056603 sshd-session[2109]: pam_unix(sshd:session): session closed for user core May 27 17:47:49.059325 systemd[1]: sshd@3-10.200.8.19:22-10.200.16.10:32994.service: Deactivated successfully. May 27 17:47:49.060687 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:47:49.062153 systemd-logind[1703]: Session 6 logged out. Waiting for processes to exit. May 27 17:47:49.062917 systemd-logind[1703]: Removed session 6. May 27 17:47:49.172334 systemd[1]: Started sshd@4-10.200.8.19:22-10.200.16.10:56260.service - OpenSSH per-connection server daemon (10.200.16.10:56260). May 27 17:47:49.796627 sshd[2117]: Accepted publickey for core from 10.200.16.10 port 56260 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:47:49.797803 sshd-session[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:49.802126 systemd-logind[1703]: New session 7 of user core. May 27 17:47:49.806674 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 27 17:47:50.205434 sudo[2120]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:47:50.205789 sudo[2120]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:47:50.226344 sudo[2120]: pam_unix(sudo:session): session closed for user root May 27 17:47:50.328080 sshd[2119]: Connection closed by 10.200.16.10 port 56260 May 27 17:47:50.328742 sshd-session[2117]: pam_unix(sshd:session): session closed for user core May 27 17:47:50.331775 systemd[1]: sshd@4-10.200.8.19:22-10.200.16.10:56260.service: Deactivated successfully. May 27 17:47:50.333288 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:47:50.334943 systemd-logind[1703]: Session 7 logged out. Waiting for processes to exit. May 27 17:47:50.335718 systemd-logind[1703]: Removed session 7. May 27 17:47:50.438356 systemd[1]: Started sshd@5-10.200.8.19:22-10.200.16.10:56268.service - OpenSSH per-connection server daemon (10.200.16.10:56268). May 27 17:47:50.957515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 27 17:47:50.959377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:47:51.064941 sshd[2126]: Accepted publickey for core from 10.200.16.10 port 56268 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:47:51.065979 sshd-session[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:51.070186 systemd-logind[1703]: New session 8 of user core. May 27 17:47:51.077646 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 17:47:51.407757 sudo[2133]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:47:51.407967 sudo[2133]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:47:51.456412 sudo[2133]: pam_unix(sudo:session): session closed for user root May 27 17:47:51.461612 sudo[2132]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:47:51.461846 sudo[2132]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:47:51.473103 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:47:51.487394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:47:51.492036 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:47:51.513999 augenrules[2165]: No rules May 27 17:47:51.515031 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:47:51.515265 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:47:51.515968 sudo[2132]: pam_unix(sudo:session): session closed for user root May 27 17:47:51.530370 kubelet[2141]: E0527 17:47:51.530337 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:47:51.531737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:47:51.531845 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:47:51.532092 systemd[1]: kubelet.service: Consumed 119ms CPU time, 110.8M memory peak. 
May 27 17:47:51.635650 sshd[2131]: Connection closed by 10.200.16.10 port 56268 May 27 17:47:51.636073 sshd-session[2126]: pam_unix(sshd:session): session closed for user core May 27 17:47:51.639081 systemd[1]: sshd@5-10.200.8.19:22-10.200.16.10:56268.service: Deactivated successfully. May 27 17:47:51.640128 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:47:51.640789 systemd-logind[1703]: Session 8 logged out. Waiting for processes to exit. May 27 17:47:51.641666 systemd-logind[1703]: Removed session 8. May 27 17:47:51.749282 systemd[1]: Started sshd@6-10.200.8.19:22-10.200.16.10:56282.service - OpenSSH per-connection server daemon (10.200.16.10:56282). May 27 17:47:52.373687 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 56282 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:47:52.374859 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:52.379099 systemd-logind[1703]: New session 9 of user core. May 27 17:47:52.384682 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 17:47:52.714593 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:47:52.714785 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:47:55.872866 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 27 17:47:55.883806 (dockerd)[2197]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:47:56.412182 dockerd[2197]: time="2025-05-27T17:47:56.412137541Z" level=info msg="Starting up" May 27 17:47:56.412814 dockerd[2197]: time="2025-05-27T17:47:56.412787358Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:47:56.499146 dockerd[2197]: time="2025-05-27T17:47:56.499025629Z" level=info msg="Loading containers: start." May 27 17:47:56.530610 kernel: Initializing XFRM netlink socket May 27 17:47:56.754564 systemd-networkd[1358]: docker0: Link UP May 27 17:47:56.765576 dockerd[2197]: time="2025-05-27T17:47:56.765552829Z" level=info msg="Loading containers: done." May 27 17:47:56.783622 dockerd[2197]: time="2025-05-27T17:47:56.783602045Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:47:56.783721 dockerd[2197]: time="2025-05-27T17:47:56.783656888Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:47:56.783745 dockerd[2197]: time="2025-05-27T17:47:56.783730978Z" level=info msg="Initializing buildkit" May 27 17:47:56.819035 dockerd[2197]: time="2025-05-27T17:47:56.819000716Z" level=info msg="Completed buildkit initialization" May 27 17:47:56.824746 dockerd[2197]: time="2025-05-27T17:47:56.824721409Z" level=info msg="Daemon has completed initialization" May 27 17:47:56.824894 dockerd[2197]: time="2025-05-27T17:47:56.824764229Z" level=info msg="API listen on /run/docker.sock" May 27 17:47:56.824878 systemd[1]: Started docker.service - Docker Application Container Engine. 
May 27 17:47:59.081585 containerd[1720]: time="2025-05-27T17:47:59.081523759Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 17:47:59.766725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667188521.mount: Deactivated successfully. May 27 17:48:00.975711 containerd[1720]: time="2025-05-27T17:48:00.975671119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:00.978070 containerd[1720]: time="2025-05-27T17:48:00.978038187Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075411" May 27 17:48:00.980485 containerd[1720]: time="2025-05-27T17:48:00.980448682Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:00.983584 containerd[1720]: time="2025-05-27T17:48:00.983559450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:00.984207 containerd[1720]: time="2025-05-27T17:48:00.984066369Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.902481905s" May 27 17:48:00.984207 containerd[1720]: time="2025-05-27T17:48:00.984094414Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 27 17:48:00.984718 containerd[1720]: 
time="2025-05-27T17:48:00.984692240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 17:48:01.658086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 27 17:48:01.660017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:48:02.326660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:48:02.333772 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:48:02.377860 kubelet[2464]: E0527 17:48:02.377839 2464 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:48:02.379669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:48:02.379799 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:48:02.380096 systemd[1]: kubelet.service: Consumed 123ms CPU time, 110.3M memory peak. May 27 17:48:02.409543 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB May 27 17:48:02.704754 containerd[1720]: time="2025-05-27T17:48:02.704721628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:02.706811 containerd[1720]: time="2025-05-27T17:48:02.706781289Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011398" May 27 17:48:02.709784 containerd[1720]: time="2025-05-27T17:48:02.709753729Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:02.715822 containerd[1720]: time="2025-05-27T17:48:02.715783742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:02.716480 containerd[1720]: time="2025-05-27T17:48:02.716377904Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.731654486s" May 27 17:48:02.716480 containerd[1720]: time="2025-05-27T17:48:02.716402876Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 27 17:48:02.716848 containerd[1720]: time="2025-05-27T17:48:02.716828949Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 17:48:02.897227 update_engine[1704]: I20250527 17:48:02.897176 1704 update_attempter.cc:509] 
Updating boot flags... May 27 17:48:04.117463 containerd[1720]: time="2025-05-27T17:48:04.117419887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:04.119416 containerd[1720]: time="2025-05-27T17:48:04.119386668Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148968" May 27 17:48:04.121614 containerd[1720]: time="2025-05-27T17:48:04.121582913Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:04.124911 containerd[1720]: time="2025-05-27T17:48:04.124860162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:04.125362 containerd[1720]: time="2025-05-27T17:48:04.125315597Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.408373589s" May 27 17:48:04.125362 containerd[1720]: time="2025-05-27T17:48:04.125344748Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 27 17:48:04.125771 containerd[1720]: time="2025-05-27T17:48:04.125738782Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 17:48:06.935054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215245903.mount: Deactivated successfully. 
May 27 17:48:07.315466 containerd[1720]: time="2025-05-27T17:48:07.315372676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:07.319429 containerd[1720]: time="2025-05-27T17:48:07.319395394Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889083" May 27 17:48:07.323019 containerd[1720]: time="2025-05-27T17:48:07.322975065Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:07.325905 containerd[1720]: time="2025-05-27T17:48:07.325871931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:07.326317 containerd[1720]: time="2025-05-27T17:48:07.326105732Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 3.200344822s" May 27 17:48:07.326317 containerd[1720]: time="2025-05-27T17:48:07.326130864Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 17:48:07.326664 containerd[1720]: time="2025-05-27T17:48:07.326644858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 17:48:07.919649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3429160481.mount: Deactivated successfully. 
May 27 17:48:08.810776 containerd[1720]: time="2025-05-27T17:48:08.810732563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:08.813562 containerd[1720]: time="2025-05-27T17:48:08.813524995Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" May 27 17:48:08.815904 containerd[1720]: time="2025-05-27T17:48:08.815866274Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:08.819117 containerd[1720]: time="2025-05-27T17:48:08.819080034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:08.819760 containerd[1720]: time="2025-05-27T17:48:08.819656958Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.492987364s" May 27 17:48:08.819760 containerd[1720]: time="2025-05-27T17:48:08.819684436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 27 17:48:08.820166 containerd[1720]: time="2025-05-27T17:48:08.820137789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 17:48:09.323043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865412629.mount: Deactivated successfully. 
May 27 17:48:09.340852 containerd[1720]: time="2025-05-27T17:48:09.340819758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:48:09.343889 containerd[1720]: time="2025-05-27T17:48:09.343861036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 27 17:48:09.347045 containerd[1720]: time="2025-05-27T17:48:09.347012824Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:48:09.350453 containerd[1720]: time="2025-05-27T17:48:09.350415393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:48:09.351040 containerd[1720]: time="2025-05-27T17:48:09.350768558Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 530.593155ms" May 27 17:48:09.351040 containerd[1720]: time="2025-05-27T17:48:09.350794990Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 17:48:09.351253 containerd[1720]: time="2025-05-27T17:48:09.351229564Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 17:48:11.263905 containerd[1720]: time="2025-05-27T17:48:11.263866209Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:11.265696 containerd[1720]: time="2025-05-27T17:48:11.265664841Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142747" May 27 17:48:11.268036 containerd[1720]: time="2025-05-27T17:48:11.268002635Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:11.270944 containerd[1720]: time="2025-05-27T17:48:11.270907213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:11.271577 containerd[1720]: time="2025-05-27T17:48:11.271459783Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.920206503s" May 27 17:48:11.271577 containerd[1720]: time="2025-05-27T17:48:11.271485130Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 17:48:12.408168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 27 17:48:12.411715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:48:12.881964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:48:12.888844 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:48:12.923600 kubelet[2613]: E0527 17:48:12.923572 2613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:48:12.925560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:48:12.925673 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:48:12.926162 systemd[1]: kubelet.service: Consumed 130ms CPU time, 108.2M memory peak. May 27 17:48:13.966902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:48:13.967186 systemd[1]: kubelet.service: Consumed 130ms CPU time, 108.2M memory peak. May 27 17:48:13.969050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:48:13.987296 systemd[1]: Reload requested from client PID 2628 ('systemctl') (unit session-9.scope)... May 27 17:48:13.987307 systemd[1]: Reloading... May 27 17:48:14.054590 zram_generator::config[2670]: No configuration found. May 27 17:48:14.139601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:48:14.234709 systemd[1]: Reloading finished in 247 ms. May 27 17:48:14.362608 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 17:48:14.362686 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 17:48:14.362994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:48:14.363044 systemd[1]: kubelet.service: Consumed 73ms CPU time, 83.3M memory peak. May 27 17:48:14.364348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:48:15.474896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:48:15.482727 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:48:15.514583 kubelet[2739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:48:15.515556 kubelet[2739]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:48:15.515556 kubelet[2739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 17:48:15.515556 kubelet[2739]: I0527 17:48:15.514866 2739 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:48:15.909276 kubelet[2739]: I0527 17:48:15.909253 2739 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:48:15.909276 kubelet[2739]: I0527 17:48:15.909269 2739 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:48:15.909443 kubelet[2739]: I0527 17:48:15.909433 2739 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:48:15.935555 kubelet[2739]: E0527 17:48:15.934030 2739 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 17:48:15.935555 kubelet[2739]: I0527 17:48:15.935416 2739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:48:15.941056 kubelet[2739]: I0527 17:48:15.941042 2739 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:48:15.944435 kubelet[2739]: I0527 17:48:15.944420 2739 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:48:15.944614 kubelet[2739]: I0527 17:48:15.944596 2739 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:48:15.944733 kubelet[2739]: I0527 17:48:15.944613 2739 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-92788821a5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:48:15.944833 kubelet[2739]: I0527 17:48:15.944737 2739 topology_manager.go:138] "Creating topology manager with none policy" May 27 
17:48:15.944833 kubelet[2739]: I0527 17:48:15.944746 2739 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:48:15.944874 kubelet[2739]: I0527 17:48:15.944836 2739 state_mem.go:36] "Initialized new in-memory state store" May 27 17:48:15.947437 kubelet[2739]: I0527 17:48:15.947368 2739 kubelet.go:480] "Attempting to sync node with API server" May 27 17:48:15.947437 kubelet[2739]: I0527 17:48:15.947383 2739 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:48:15.948713 kubelet[2739]: I0527 17:48:15.948695 2739 kubelet.go:386] "Adding apiserver pod source" May 27 17:48:15.948713 kubelet[2739]: I0527 17:48:15.948714 2739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:48:15.955331 kubelet[2739]: E0527 17:48:15.955208 2739 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-a-92788821a5&limit=500&resourceVersion=0\": dial tcp 10.200.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 17:48:15.956329 kubelet[2739]: E0527 17:48:15.956309 2739 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 17:48:15.956885 kubelet[2739]: I0527 17:48:15.956444 2739 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:48:15.956885 kubelet[2739]: I0527 17:48:15.956828 2739 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 
17:48:15.957885 kubelet[2739]: W0527 17:48:15.957877 2739 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 17:48:15.959551 kubelet[2739]: I0527 17:48:15.959526 2739 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:48:15.959607 kubelet[2739]: I0527 17:48:15.959579 2739 server.go:1289] "Started kubelet" May 27 17:48:16.008509 kubelet[2739]: I0527 17:48:16.008469 2739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:48:16.009007 kubelet[2739]: I0527 17:48:16.008902 2739 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:48:16.009007 kubelet[2739]: I0527 17:48:16.008900 2739 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:48:16.009972 kubelet[2739]: I0527 17:48:16.009715 2739 server.go:317] "Adding debug handlers to kubelet server" May 27 17:48:16.011316 kubelet[2739]: I0527 17:48:16.011292 2739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:48:16.014524 kubelet[2739]: I0527 17:48:16.014496 2739 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:48:16.017585 kubelet[2739]: E0527 17:48:16.015017 2739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.0.0-a-92788821a5.18437385204b9814 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-a-92788821a5,UID:ci-4344.0.0-a-92788821a5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-a-92788821a5,},FirstTimestamp:2025-05-27 17:48:15.959554068 +0000 UTC m=+0.474217298,LastTimestamp:2025-05-27 17:48:15.959554068 +0000 UTC m=+0.474217298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-a-92788821a5,}" May 27 17:48:16.017585 kubelet[2739]: I0527 17:48:16.017555 2739 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:48:16.018639 kubelet[2739]: E0527 17:48:16.018620 2739 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-92788821a5\" not found" May 27 17:48:16.020103 kubelet[2739]: I0527 17:48:16.019367 2739 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:48:16.020103 kubelet[2739]: I0527 17:48:16.019418 2739 reconciler.go:26] "Reconciler: start to sync state" May 27 17:48:16.020103 kubelet[2739]: E0527 17:48:16.019853 2739 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 17:48:16.020103 kubelet[2739]: E0527 17:48:16.019910 2739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-92788821a5?timeout=10s\": dial tcp 10.200.8.19:6443: connect: connection refused" interval="200ms" May 27 17:48:16.021334 kubelet[2739]: I0527 17:48:16.021310 2739 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:48:16.024102 kubelet[2739]: I0527 17:48:16.024090 2739 
factory.go:223] Registration of the containerd container factory successfully May 27 17:48:16.024183 kubelet[2739]: I0527 17:48:16.024178 2739 factory.go:223] Registration of the systemd container factory successfully May 27 17:48:16.027142 kubelet[2739]: E0527 17:48:16.026933 2739 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:48:16.051300 kubelet[2739]: I0527 17:48:16.051286 2739 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:48:16.051300 kubelet[2739]: I0527 17:48:16.051297 2739 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:48:16.051384 kubelet[2739]: I0527 17:48:16.051309 2739 state_mem.go:36] "Initialized new in-memory state store" May 27 17:48:16.071448 kubelet[2739]: I0527 17:48:16.071412 2739 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 17:48:16.072418 kubelet[2739]: I0527 17:48:16.072392 2739 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 17:48:16.072418 kubelet[2739]: I0527 17:48:16.072415 2739 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:48:16.072498 kubelet[2739]: I0527 17:48:16.072432 2739 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
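The `container_manager_linux.go` NodeConfig dump earlier in this boot lists the kubelet's default hard-eviction thresholds: `memory.available < 100Mi`, `nodefs.available < 10%`, `nodefs.inodesFree < 5%`, `imagefs.available < 15%`, `imagefs.inodesFree < 5%`. Restated as data, with a hypothetical checker (the threshold values are from the logged config; the function is purely illustrative):

```python
# Hard-eviction thresholds as logged in the container-manager NodeConfig dump.
# "quantity" thresholds are absolute bytes; "percentage" thresholds are a
# fraction of the signal's capacity.
HARD_EVICTION = {
    "memory.available":   ("quantity",   100 * 1024 * 1024),  # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breaches(signal: str, available: float, capacity: float) -> bool:
    """Hypothetical helper: is `available` below the configured threshold?"""
    kind, value = HARD_EVICTION[signal]
    limit = value if kind == "quantity" else value * capacity
    return available < limit

# 50Mi of free memory is under the 100Mi hard threshold:
print(breaches("memory.available", 50 * 1024 * 1024, 8 * 1024**3))  # True
```

Note the eviction manager above cannot actually evaluate these yet ("failed to get summary stats") because the node object does not exist while the API server is unreachable.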
May 27 17:48:16.072498 kubelet[2739]: I0527 17:48:16.072439 2739 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:48:16.072498 kubelet[2739]: E0527 17:48:16.072466 2739 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:48:16.073861 kubelet[2739]: E0527 17:48:16.073834 2739 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 17:48:16.119662 kubelet[2739]: E0527 17:48:16.119640 2739 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-92788821a5\" not found" May 27 17:48:16.150245 kubelet[2739]: I0527 17:48:16.150233 2739 policy_none.go:49] "None policy: Start" May 27 17:48:16.150298 kubelet[2739]: I0527 17:48:16.150248 2739 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:48:16.150298 kubelet[2739]: I0527 17:48:16.150259 2739 state_mem.go:35] "Initializing new in-memory state store" May 27 17:48:16.158462 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:48:16.168936 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 17:48:16.171786 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 27 17:48:16.173424 kubelet[2739]: E0527 17:48:16.173406 2739 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 17:48:16.190004 kubelet[2739]: E0527 17:48:16.189992 2739 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:48:16.190365 kubelet[2739]: I0527 17:48:16.190354 2739 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:48:16.190403 kubelet[2739]: I0527 17:48:16.190367 2739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:48:16.190695 kubelet[2739]: I0527 17:48:16.190682 2739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:48:16.191949 kubelet[2739]: E0527 17:48:16.191907 2739 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:48:16.192042 kubelet[2739]: E0527 17:48:16.192035 2739 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.0.0-a-92788821a5\" not found" May 27 17:48:16.220282 kubelet[2739]: E0527 17:48:16.220241 2739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-92788821a5?timeout=10s\": dial tcp 10.200.8.19:6443: connect: connection refused" interval="400ms" May 27 17:48:16.292729 kubelet[2739]: I0527 17:48:16.292707 2739 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-92788821a5" May 27 17:48:16.293057 kubelet[2739]: E0527 17:48:16.293016 2739 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.19:6443/api/v1/nodes\": dial tcp 10.200.8.19:6443: connect: connection refused" node="ci-4344.0.0-a-92788821a5" May 
27 17:48:16.384880 systemd[1]: Created slice kubepods-burstable-pod0febc37fff69e04b8adbcb60bafc0b7c.slice - libcontainer container kubepods-burstable-pod0febc37fff69e04b8adbcb60bafc0b7c.slice. May 27 17:48:16.392113 kubelet[2739]: E0527 17:48:16.391993 2739 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-92788821a5\" not found" node="ci-4344.0.0-a-92788821a5" May 27 17:48:16.395077 systemd[1]: Created slice kubepods-burstable-pod6cb63268944f76885787285ed9f0e621.slice - libcontainer container kubepods-burstable-pod6cb63268944f76885787285ed9f0e621.slice. May 27 17:48:16.400388 kubelet[2739]: E0527 17:48:16.400273 2739 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-92788821a5\" not found" node="ci-4344.0.0-a-92788821a5" May 27 17:48:16.402208 systemd[1]: Created slice kubepods-burstable-pod074ebd10af7204d9677bd6a499ceb329.slice - libcontainer container kubepods-burstable-pod074ebd10af7204d9677bd6a499ceb329.slice. 
May 27 17:48:16.403395 kubelet[2739]: E0527 17:48:16.403380 2739 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-92788821a5\" not found" node="ci-4344.0.0-a-92788821a5" May 27 17:48:16.420977 kubelet[2739]: I0527 17:48:16.420829 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cb63268944f76885787285ed9f0e621-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-92788821a5\" (UID: \"6cb63268944f76885787285ed9f0e621\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:16.420977 kubelet[2739]: I0527 17:48:16.420852 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:16.420977 kubelet[2739]: I0527 17:48:16.420868 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:16.420977 kubelet[2739]: I0527 17:48:16.420886 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " 
pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:16.420977 kubelet[2739]: I0527 17:48:16.420898 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0febc37fff69e04b8adbcb60bafc0b7c-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-92788821a5\" (UID: \"0febc37fff69e04b8adbcb60bafc0b7c\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-92788821a5" May 27 17:48:16.421144 kubelet[2739]: I0527 17:48:16.420914 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cb63268944f76885787285ed9f0e621-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-92788821a5\" (UID: \"6cb63268944f76885787285ed9f0e621\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:16.421144 kubelet[2739]: I0527 17:48:16.420927 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cb63268944f76885787285ed9f0e621-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-92788821a5\" (UID: \"6cb63268944f76885787285ed9f0e621\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:16.421144 kubelet[2739]: I0527 17:48:16.420950 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:16.421144 kubelet[2739]: I0527 17:48:16.421049 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:16.494631 kubelet[2739]: I0527 17:48:16.494596 2739 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-92788821a5" May 27 17:48:16.494886 kubelet[2739]: E0527 17:48:16.494851 2739 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.19:6443/api/v1/nodes\": dial tcp 10.200.8.19:6443: connect: connection refused" node="ci-4344.0.0-a-92788821a5" May 27 17:48:16.621092 kubelet[2739]: E0527 17:48:16.621068 2739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-92788821a5?timeout=10s\": dial tcp 10.200.8.19:6443: connect: connection refused" interval="800ms" May 27 17:48:16.692928 containerd[1720]: time="2025-05-27T17:48:16.692833161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-92788821a5,Uid:0febc37fff69e04b8adbcb60bafc0b7c,Namespace:kube-system,Attempt:0,}" May 27 17:48:16.701253 containerd[1720]: time="2025-05-27T17:48:16.701227887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-92788821a5,Uid:6cb63268944f76885787285ed9f0e621,Namespace:kube-system,Attempt:0,}" May 27 17:48:16.707489 containerd[1720]: time="2025-05-27T17:48:16.707431111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-92788821a5,Uid:074ebd10af7204d9677bd6a499ceb329,Namespace:kube-system,Attempt:0,}" May 27 17:48:16.765516 containerd[1720]: time="2025-05-27T17:48:16.765479473Z" level=info msg="connecting to shim 3105db567a5224497a7c287d75af452ada1f25c4c35eaa49fc96e88fed16887b" address="unix:///run/containerd/s/fd21fc7ef01253a0105b2de3583d519728ae8b12078aa1423ef9603244edc055" namespace=k8s.io 
protocol=ttrpc version=3 May 27 17:48:16.786042 containerd[1720]: time="2025-05-27T17:48:16.785973929Z" level=info msg="connecting to shim e96fce842af0464836936858431390b7841ec172efeb9559f7a801faf6b79834" address="unix:///run/containerd/s/d762e0c9f463044877c2c1c5157a1de9878d91a7bac8f7b4f89304be41152622" namespace=k8s.io protocol=ttrpc version=3 May 27 17:48:16.793810 containerd[1720]: time="2025-05-27T17:48:16.793752844Z" level=info msg="connecting to shim f7d5c914035e24ad53cd432322f666f23c65fc5de262bd97e2b62623caca159e" address="unix:///run/containerd/s/91ef3af1656c339743e80884f41061ebab4d94192a8f1a1a05fd6da613701042" namespace=k8s.io protocol=ttrpc version=3 May 27 17:48:16.800673 systemd[1]: Started cri-containerd-3105db567a5224497a7c287d75af452ada1f25c4c35eaa49fc96e88fed16887b.scope - libcontainer container 3105db567a5224497a7c287d75af452ada1f25c4c35eaa49fc96e88fed16887b. May 27 17:48:16.814807 systemd[1]: Started cri-containerd-e96fce842af0464836936858431390b7841ec172efeb9559f7a801faf6b79834.scope - libcontainer container e96fce842af0464836936858431390b7841ec172efeb9559f7a801faf6b79834. May 27 17:48:16.830650 systemd[1]: Started cri-containerd-f7d5c914035e24ad53cd432322f666f23c65fc5de262bd97e2b62623caca159e.scope - libcontainer container f7d5c914035e24ad53cd432322f666f23c65fc5de262bd97e2b62623caca159e. 
May 27 17:48:16.867679 containerd[1720]: time="2025-05-27T17:48:16.867659191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-92788821a5,Uid:0febc37fff69e04b8adbcb60bafc0b7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3105db567a5224497a7c287d75af452ada1f25c4c35eaa49fc96e88fed16887b\"" May 27 17:48:16.875158 containerd[1720]: time="2025-05-27T17:48:16.875007750Z" level=info msg="CreateContainer within sandbox \"3105db567a5224497a7c287d75af452ada1f25c4c35eaa49fc96e88fed16887b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:48:16.889189 containerd[1720]: time="2025-05-27T17:48:16.889172905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-92788821a5,Uid:074ebd10af7204d9677bd6a499ceb329,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7d5c914035e24ad53cd432322f666f23c65fc5de262bd97e2b62623caca159e\"" May 27 17:48:16.891812 containerd[1720]: time="2025-05-27T17:48:16.891787404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-92788821a5,Uid:6cb63268944f76885787285ed9f0e621,Namespace:kube-system,Attempt:0,} returns sandbox id \"e96fce842af0464836936858431390b7841ec172efeb9559f7a801faf6b79834\"" May 27 17:48:16.895778 kubelet[2739]: I0527 17:48:16.895762 2739 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-92788821a5" May 27 17:48:16.896428 kubelet[2739]: E0527 17:48:16.895956 2739 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.19:6443/api/v1/nodes\": dial tcp 10.200.8.19:6443: connect: connection refused" node="ci-4344.0.0-a-92788821a5" May 27 17:48:16.896685 containerd[1720]: time="2025-05-27T17:48:16.896663723Z" level=info msg="CreateContainer within sandbox \"f7d5c914035e24ad53cd432322f666f23c65fc5de262bd97e2b62623caca159e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 
17:48:16.907942 containerd[1720]: time="2025-05-27T17:48:16.907921136Z" level=info msg="Container 0c76c5a9e67faadc5271bd15a8c44808f40abc17bad9a63011598d82f3ecfc2b: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:16.908740 containerd[1720]: time="2025-05-27T17:48:16.908716210Z" level=info msg="CreateContainer within sandbox \"e96fce842af0464836936858431390b7841ec172efeb9559f7a801faf6b79834\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:48:16.918375 containerd[1720]: time="2025-05-27T17:48:16.918354353Z" level=info msg="Container 09cacce9140aa69414f371ee2b0c31d7e1e6f1a97fbe4655a88716477c8b22cb: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:16.940280 containerd[1720]: time="2025-05-27T17:48:16.940258445Z" level=info msg="CreateContainer within sandbox \"3105db567a5224497a7c287d75af452ada1f25c4c35eaa49fc96e88fed16887b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0c76c5a9e67faadc5271bd15a8c44808f40abc17bad9a63011598d82f3ecfc2b\"" May 27 17:48:16.940759 containerd[1720]: time="2025-05-27T17:48:16.940738960Z" level=info msg="StartContainer for \"0c76c5a9e67faadc5271bd15a8c44808f40abc17bad9a63011598d82f3ecfc2b\"" May 27 17:48:16.943772 containerd[1720]: time="2025-05-27T17:48:16.943312399Z" level=info msg="connecting to shim 0c76c5a9e67faadc5271bd15a8c44808f40abc17bad9a63011598d82f3ecfc2b" address="unix:///run/containerd/s/fd21fc7ef01253a0105b2de3583d519728ae8b12078aa1423ef9603244edc055" protocol=ttrpc version=3 May 27 17:48:16.945181 containerd[1720]: time="2025-05-27T17:48:16.945108158Z" level=info msg="CreateContainer within sandbox \"f7d5c914035e24ad53cd432322f666f23c65fc5de262bd97e2b62623caca159e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"09cacce9140aa69414f371ee2b0c31d7e1e6f1a97fbe4655a88716477c8b22cb\"" May 27 17:48:16.946173 containerd[1720]: time="2025-05-27T17:48:16.946112102Z" level=info msg="StartContainer for 
\"09cacce9140aa69414f371ee2b0c31d7e1e6f1a97fbe4655a88716477c8b22cb\"" May 27 17:48:16.947169 containerd[1720]: time="2025-05-27T17:48:16.947144372Z" level=info msg="Container e9a44527b5de4bb3f4e575f0462226a188708b2a8a5a57c7cc560dd12bbfc8e5: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:16.948976 containerd[1720]: time="2025-05-27T17:48:16.948943356Z" level=info msg="connecting to shim 09cacce9140aa69414f371ee2b0c31d7e1e6f1a97fbe4655a88716477c8b22cb" address="unix:///run/containerd/s/91ef3af1656c339743e80884f41061ebab4d94192a8f1a1a05fd6da613701042" protocol=ttrpc version=3 May 27 17:48:16.957659 systemd[1]: Started cri-containerd-0c76c5a9e67faadc5271bd15a8c44808f40abc17bad9a63011598d82f3ecfc2b.scope - libcontainer container 0c76c5a9e67faadc5271bd15a8c44808f40abc17bad9a63011598d82f3ecfc2b. May 27 17:48:16.966382 containerd[1720]: time="2025-05-27T17:48:16.965999768Z" level=info msg="CreateContainer within sandbox \"e96fce842af0464836936858431390b7841ec172efeb9559f7a801faf6b79834\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e9a44527b5de4bb3f4e575f0462226a188708b2a8a5a57c7cc560dd12bbfc8e5\"" May 27 17:48:16.966899 containerd[1720]: time="2025-05-27T17:48:16.966880303Z" level=info msg="StartContainer for \"e9a44527b5de4bb3f4e575f0462226a188708b2a8a5a57c7cc560dd12bbfc8e5\"" May 27 17:48:16.968941 containerd[1720]: time="2025-05-27T17:48:16.968757781Z" level=info msg="connecting to shim e9a44527b5de4bb3f4e575f0462226a188708b2a8a5a57c7cc560dd12bbfc8e5" address="unix:///run/containerd/s/d762e0c9f463044877c2c1c5157a1de9878d91a7bac8f7b4f89304be41152622" protocol=ttrpc version=3 May 27 17:48:16.974672 systemd[1]: Started cri-containerd-09cacce9140aa69414f371ee2b0c31d7e1e6f1a97fbe4655a88716477c8b22cb.scope - libcontainer container 09cacce9140aa69414f371ee2b0c31d7e1e6f1a97fbe4655a88716477c8b22cb. 
May 27 17:48:16.988523 systemd[1]: Started cri-containerd-e9a44527b5de4bb3f4e575f0462226a188708b2a8a5a57c7cc560dd12bbfc8e5.scope - libcontainer container e9a44527b5de4bb3f4e575f0462226a188708b2a8a5a57c7cc560dd12bbfc8e5. May 27 17:48:16.998076 kubelet[2739]: E0527 17:48:16.998049 2739 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 17:48:17.032412 containerd[1720]: time="2025-05-27T17:48:17.032385931Z" level=info msg="StartContainer for \"0c76c5a9e67faadc5271bd15a8c44808f40abc17bad9a63011598d82f3ecfc2b\" returns successfully" May 27 17:48:17.046009 containerd[1720]: time="2025-05-27T17:48:17.045987897Z" level=info msg="StartContainer for \"09cacce9140aa69414f371ee2b0c31d7e1e6f1a97fbe4655a88716477c8b22cb\" returns successfully" May 27 17:48:17.063834 kubelet[2739]: E0527 17:48:17.063811 2739 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 17:48:17.073505 containerd[1720]: time="2025-05-27T17:48:17.073476781Z" level=info msg="StartContainer for \"e9a44527b5de4bb3f4e575f0462226a188708b2a8a5a57c7cc560dd12bbfc8e5\" returns successfully" May 27 17:48:17.087332 kubelet[2739]: E0527 17:48:17.087247 2739 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-92788821a5\" not found" node="ci-4344.0.0-a-92788821a5" May 27 17:48:17.089215 kubelet[2739]: E0527 17:48:17.089198 2739 kubelet.go:3305] "No need to create a mirror pod, since failed 
to get node info from the cluster" err="node \"ci-4344.0.0-a-92788821a5\" not found" node="ci-4344.0.0-a-92788821a5" May 27 17:48:17.092932 kubelet[2739]: E0527 17:48:17.092916 2739 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-92788821a5\" not found" node="ci-4344.0.0-a-92788821a5" May 27 17:48:17.699395 kubelet[2739]: I0527 17:48:17.699377 2739 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-92788821a5" May 27 17:48:18.095258 kubelet[2739]: E0527 17:48:18.095194 2739 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-92788821a5\" not found" node="ci-4344.0.0-a-92788821a5" May 27 17:48:18.095488 kubelet[2739]: E0527 17:48:18.095458 2739 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-92788821a5\" not found" node="ci-4344.0.0-a-92788821a5" May 27 17:48:18.811551 kubelet[2739]: I0527 17:48:18.811476 2739 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-92788821a5" May 27 17:48:18.818449 kubelet[2739]: I0527 17:48:18.818429 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-92788821a5" May 27 17:48:18.866939 kubelet[2739]: E0527 17:48:18.866920 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-92788821a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-a-92788821a5" May 27 17:48:18.867112 kubelet[2739]: I0527 17:48:18.867044 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:18.868426 kubelet[2739]: E0527 17:48:18.868409 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-92788821a5\" is forbidden: 
no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:18.868596 kubelet[2739]: I0527 17:48:18.868456 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:18.869788 kubelet[2739]: E0527 17:48:18.869766 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:18.907902 kubelet[2739]: E0527 17:48:18.907867 2739 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" May 27 17:48:18.958733 kubelet[2739]: I0527 17:48:18.958703 2739 apiserver.go:52] "Watching apiserver" May 27 17:48:19.019447 kubelet[2739]: I0527 17:48:19.019427 2739 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:48:19.095285 kubelet[2739]: I0527 17:48:19.095220 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-92788821a5" May 27 17:48:19.096955 kubelet[2739]: E0527 17:48:19.096919 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-92788821a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-a-92788821a5" May 27 17:48:20.948824 systemd[1]: Reload requested from client PID 3020 ('systemctl') (unit session-9.scope)... May 27 17:48:20.948837 systemd[1]: Reloading... May 27 17:48:21.004555 zram_generator::config[3065]: No configuration found. 
May 27 17:48:21.090569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:48:21.184259 systemd[1]: Reloading finished in 235 ms. May 27 17:48:21.203991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:48:21.221281 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:48:21.221488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:48:21.221527 systemd[1]: kubelet.service: Consumed 748ms CPU time, 130M memory peak. May 27 17:48:21.222943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:48:22.836226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:48:22.842304 (kubelet)[3133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:48:22.876239 kubelet[3133]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:48:22.876239 kubelet[3133]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:48:22.876239 kubelet[3133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 17:48:22.876474 kubelet[3133]: I0527 17:48:22.876279 3133 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:48:22.884066 kubelet[3133]: I0527 17:48:22.882755 3133 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:48:22.884066 kubelet[3133]: I0527 17:48:22.882772 3133 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:48:22.884066 kubelet[3133]: I0527 17:48:22.883185 3133 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:48:22.885107 kubelet[3133]: I0527 17:48:22.885082 3133 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 17:48:22.887085 kubelet[3133]: I0527 17:48:22.887071 3133 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:48:22.889971 kubelet[3133]: I0527 17:48:22.889961 3133 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:48:22.892400 kubelet[3133]: I0527 17:48:22.892387 3133 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:48:22.893269 kubelet[3133]: I0527 17:48:22.892786 3133 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:48:22.893269 kubelet[3133]: I0527 17:48:22.892814 3133 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-92788821a5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:48:22.893269 kubelet[3133]: I0527 17:48:22.892995 3133 topology_manager.go:138] "Creating topology manager with none policy" May 27 
17:48:22.893269 kubelet[3133]: I0527 17:48:22.893013 3133 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:48:22.893269 kubelet[3133]: I0527 17:48:22.893053 3133 state_mem.go:36] "Initialized new in-memory state store" May 27 17:48:22.893504 kubelet[3133]: I0527 17:48:22.893178 3133 kubelet.go:480] "Attempting to sync node with API server" May 27 17:48:22.893504 kubelet[3133]: I0527 17:48:22.893191 3133 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:48:22.893504 kubelet[3133]: I0527 17:48:22.893209 3133 kubelet.go:386] "Adding apiserver pod source" May 27 17:48:22.893504 kubelet[3133]: I0527 17:48:22.893217 3133 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:48:22.894181 kubelet[3133]: I0527 17:48:22.894169 3133 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:48:22.894622 kubelet[3133]: I0527 17:48:22.894606 3133 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:48:22.899583 kubelet[3133]: I0527 17:48:22.896673 3133 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:48:22.899583 kubelet[3133]: I0527 17:48:22.896717 3133 server.go:1289] "Started kubelet" May 27 17:48:22.899583 kubelet[3133]: I0527 17:48:22.898808 3133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:48:22.903034 kubelet[3133]: I0527 17:48:22.902971 3133 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:48:22.905033 kubelet[3133]: I0527 17:48:22.905016 3133 server.go:317] "Adding debug handlers to kubelet server" May 27 17:48:22.911843 kubelet[3133]: I0527 17:48:22.911804 3133 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:48:22.911994 kubelet[3133]: I0527 17:48:22.911977 3133 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:48:22.912165 kubelet[3133]: I0527 17:48:22.912152 3133 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:48:22.914723 kubelet[3133]: I0527 17:48:22.914635 3133 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 17:48:22.917124 kubelet[3133]: I0527 17:48:22.917054 3133 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:48:22.917595 kubelet[3133]: E0527 17:48:22.917574 3133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-92788821a5\" not found" May 27 17:48:22.918045 kubelet[3133]: I0527 17:48:22.918038 3133 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:48:22.918158 kubelet[3133]: I0527 17:48:22.918153 3133 reconciler.go:26] "Reconciler: start to sync state" May 27 17:48:22.929047 kubelet[3133]: I0527 17:48:22.929019 3133 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 17:48:22.929047 kubelet[3133]: I0527 17:48:22.929037 3133 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:48:22.929134 kubelet[3133]: I0527 17:48:22.929053 3133 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 17:48:22.929134 kubelet[3133]: I0527 17:48:22.929059 3133 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:48:22.929134 kubelet[3133]: E0527 17:48:22.929089 3133 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:48:22.931515 kubelet[3133]: I0527 17:48:22.931486 3133 factory.go:223] Registration of the systemd container factory successfully May 27 17:48:22.931684 kubelet[3133]: I0527 17:48:22.931670 3133 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:48:22.935555 kubelet[3133]: E0527 17:48:22.933640 3133 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:48:22.935555 kubelet[3133]: I0527 17:48:22.933716 3133 factory.go:223] Registration of the containerd container factory successfully May 27 17:48:22.973985 kubelet[3133]: I0527 17:48:22.973968 3133 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:48:22.974073 kubelet[3133]: I0527 17:48:22.974065 3133 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:48:22.974114 kubelet[3133]: I0527 17:48:22.974110 3133 state_mem.go:36] "Initialized new in-memory state store" May 27 17:48:22.974296 kubelet[3133]: I0527 17:48:22.974263 3133 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:48:22.974296 kubelet[3133]: I0527 17:48:22.974272 3133 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:48:22.974296 kubelet[3133]: I0527 17:48:22.974288 3133 policy_none.go:49] "None policy: Start" May 27 17:48:22.974380 kubelet[3133]: I0527 17:48:22.974375 3133 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:48:22.974414 kubelet[3133]: I0527 17:48:22.974410 
3133 state_mem.go:35] "Initializing new in-memory state store" May 27 17:48:22.974587 kubelet[3133]: I0527 17:48:22.974582 3133 state_mem.go:75] "Updated machine memory state" May 27 17:48:22.979627 kubelet[3133]: E0527 17:48:22.979603 3133 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:48:22.979784 kubelet[3133]: I0527 17:48:22.979715 3133 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:48:22.979784 kubelet[3133]: I0527 17:48:22.979731 3133 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:48:22.980676 kubelet[3133]: I0527 17:48:22.980016 3133 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:48:22.981855 kubelet[3133]: E0527 17:48:22.981751 3133 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:48:23.029857 kubelet[3133]: I0527 17:48:23.029772 3133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-92788821a5" May 27 17:48:23.030293 kubelet[3133]: I0527 17:48:23.030278 3133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:23.030564 kubelet[3133]: I0527 17:48:23.030544 3133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:23.040311 kubelet[3133]: I0527 17:48:23.040297 3133 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:48:23.040606 kubelet[3133]: I0527 17:48:23.040401 3133 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots]" May 27 17:48:23.041175 kubelet[3133]: I0527 17:48:23.041104 3133 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:48:23.082336 kubelet[3133]: I0527 17:48:23.082290 3133 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-92788821a5" May 27 17:48:23.094572 kubelet[3133]: I0527 17:48:23.094507 3133 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.0.0-a-92788821a5" May 27 17:48:23.094641 kubelet[3133]: I0527 17:48:23.094583 3133 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-92788821a5" May 27 17:48:23.120371 kubelet[3133]: I0527 17:48:23.120324 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cb63268944f76885787285ed9f0e621-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-92788821a5\" (UID: \"6cb63268944f76885787285ed9f0e621\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:23.120371 kubelet[3133]: I0527 17:48:23.120352 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cb63268944f76885787285ed9f0e621-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-92788821a5\" (UID: \"6cb63268944f76885787285ed9f0e621\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:23.120456 kubelet[3133]: I0527 17:48:23.120408 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " 
pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:23.120456 kubelet[3133]: I0527 17:48:23.120423 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:23.120506 kubelet[3133]: I0527 17:48:23.120438 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:23.120506 kubelet[3133]: I0527 17:48:23.120484 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:23.120506 kubelet[3133]: I0527 17:48:23.120498 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/074ebd10af7204d9677bd6a499ceb329-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-92788821a5\" (UID: \"074ebd10af7204d9677bd6a499ceb329\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" May 27 17:48:23.121514 kubelet[3133]: I0527 17:48:23.120512 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/6cb63268944f76885787285ed9f0e621-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-92788821a5\" (UID: \"6cb63268944f76885787285ed9f0e621\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:23.121514 kubelet[3133]: I0527 17:48:23.120554 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0febc37fff69e04b8adbcb60bafc0b7c-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-92788821a5\" (UID: \"0febc37fff69e04b8adbcb60bafc0b7c\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-92788821a5" May 27 17:48:23.904142 kubelet[3133]: I0527 17:48:23.904120 3133 apiserver.go:52] "Watching apiserver" May 27 17:48:23.918994 kubelet[3133]: I0527 17:48:23.918966 3133 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:48:23.961780 kubelet[3133]: I0527 17:48:23.961755 3133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:23.969345 kubelet[3133]: I0527 17:48:23.968754 3133 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:48:23.969345 kubelet[3133]: E0527 17:48:23.968797 3133 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-92788821a5\" already exists" pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" May 27 17:48:23.982977 kubelet[3133]: I0527 17:48:23.982926 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.0.0-a-92788821a5" podStartSLOduration=0.982914022 podStartE2EDuration="982.914022ms" podCreationTimestamp="2025-05-27 17:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-27 17:48:23.975890377 +0000 UTC m=+1.129370687" watchObservedRunningTime="2025-05-27 17:48:23.982914022 +0000 UTC m=+1.136394348" May 27 17:48:23.990382 kubelet[3133]: I0527 17:48:23.990347 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.0.0-a-92788821a5" podStartSLOduration=0.99033625 podStartE2EDuration="990.33625ms" podCreationTimestamp="2025-05-27 17:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:48:23.982906238 +0000 UTC m=+1.136386551" watchObservedRunningTime="2025-05-27 17:48:23.99033625 +0000 UTC m=+1.143816560" May 27 17:48:23.998501 kubelet[3133]: I0527 17:48:23.998458 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-92788821a5" podStartSLOduration=0.998447735 podStartE2EDuration="998.447735ms" podCreationTimestamp="2025-05-27 17:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:48:23.990515646 +0000 UTC m=+1.143995955" watchObservedRunningTime="2025-05-27 17:48:23.998447735 +0000 UTC m=+1.151928044" May 27 17:48:27.892851 kubelet[3133]: I0527 17:48:27.892818 3133 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:48:27.893179 containerd[1720]: time="2025-05-27T17:48:27.893123156Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 27 17:48:27.893359 kubelet[3133]: I0527 17:48:27.893258 3133 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:48:28.906875 systemd[1]: Created slice kubepods-besteffort-pod21ef4d63_1397_4e4a_b1ca_8ae089f1b25a.slice - libcontainer container kubepods-besteffort-pod21ef4d63_1397_4e4a_b1ca_8ae089f1b25a.slice. May 27 17:48:28.959336 kubelet[3133]: I0527 17:48:28.959302 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/21ef4d63-1397-4e4a-b1ca-8ae089f1b25a-kube-proxy\") pod \"kube-proxy-mcll6\" (UID: \"21ef4d63-1397-4e4a-b1ca-8ae089f1b25a\") " pod="kube-system/kube-proxy-mcll6" May 27 17:48:28.959336 kubelet[3133]: I0527 17:48:28.959333 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21ef4d63-1397-4e4a-b1ca-8ae089f1b25a-xtables-lock\") pod \"kube-proxy-mcll6\" (UID: \"21ef4d63-1397-4e4a-b1ca-8ae089f1b25a\") " pod="kube-system/kube-proxy-mcll6" May 27 17:48:28.959683 kubelet[3133]: I0527 17:48:28.959350 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5zz9\" (UniqueName: \"kubernetes.io/projected/21ef4d63-1397-4e4a-b1ca-8ae089f1b25a-kube-api-access-l5zz9\") pod \"kube-proxy-mcll6\" (UID: \"21ef4d63-1397-4e4a-b1ca-8ae089f1b25a\") " pod="kube-system/kube-proxy-mcll6" May 27 17:48:28.959683 kubelet[3133]: I0527 17:48:28.959369 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21ef4d63-1397-4e4a-b1ca-8ae089f1b25a-lib-modules\") pod \"kube-proxy-mcll6\" (UID: \"21ef4d63-1397-4e4a-b1ca-8ae089f1b25a\") " pod="kube-system/kube-proxy-mcll6" May 27 17:48:29.097304 systemd[1]: Created slice 
kubepods-besteffort-poda8a64e5a_69ee_43ef_bff4_0829d1701feb.slice - libcontainer container kubepods-besteffort-poda8a64e5a_69ee_43ef_bff4_0829d1701feb.slice. May 27 17:48:29.160664 kubelet[3133]: I0527 17:48:29.160587 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a8a64e5a-69ee-43ef-bff4-0829d1701feb-var-lib-calico\") pod \"tigera-operator-844669ff44-72wxx\" (UID: \"a8a64e5a-69ee-43ef-bff4-0829d1701feb\") " pod="tigera-operator/tigera-operator-844669ff44-72wxx" May 27 17:48:29.160664 kubelet[3133]: I0527 17:48:29.160617 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkcc8\" (UniqueName: \"kubernetes.io/projected/a8a64e5a-69ee-43ef-bff4-0829d1701feb-kube-api-access-rkcc8\") pod \"tigera-operator-844669ff44-72wxx\" (UID: \"a8a64e5a-69ee-43ef-bff4-0829d1701feb\") " pod="tigera-operator/tigera-operator-844669ff44-72wxx" May 27 17:48:29.214175 containerd[1720]: time="2025-05-27T17:48:29.214109574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcll6,Uid:21ef4d63-1397-4e4a-b1ca-8ae089f1b25a,Namespace:kube-system,Attempt:0,}" May 27 17:48:29.253315 containerd[1720]: time="2025-05-27T17:48:29.253249181Z" level=info msg="connecting to shim f30bdc141484a6c4326314b594d21d853a8acaee0d13732520ee066747f36f0b" address="unix:///run/containerd/s/97c8ac7146c7ca87dc2724a946741aa2ff073870e4352ca1bda9bdf90c9a906e" namespace=k8s.io protocol=ttrpc version=3 May 27 17:48:29.280675 systemd[1]: Started cri-containerd-f30bdc141484a6c4326314b594d21d853a8acaee0d13732520ee066747f36f0b.scope - libcontainer container f30bdc141484a6c4326314b594d21d853a8acaee0d13732520ee066747f36f0b. 
May 27 17:48:29.299657 containerd[1720]: time="2025-05-27T17:48:29.299634422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcll6,Uid:21ef4d63-1397-4e4a-b1ca-8ae089f1b25a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f30bdc141484a6c4326314b594d21d853a8acaee0d13732520ee066747f36f0b\"" May 27 17:48:29.306610 containerd[1720]: time="2025-05-27T17:48:29.306585863Z" level=info msg="CreateContainer within sandbox \"f30bdc141484a6c4326314b594d21d853a8acaee0d13732520ee066747f36f0b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:48:29.333397 containerd[1720]: time="2025-05-27T17:48:29.333026681Z" level=info msg="Container 6256a46a5ef3b4266b89d08f9b7130902f3f6c255f51d9d653de7dc943b3e691: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:29.348700 containerd[1720]: time="2025-05-27T17:48:29.348678711Z" level=info msg="CreateContainer within sandbox \"f30bdc141484a6c4326314b594d21d853a8acaee0d13732520ee066747f36f0b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6256a46a5ef3b4266b89d08f9b7130902f3f6c255f51d9d653de7dc943b3e691\"" May 27 17:48:29.349177 containerd[1720]: time="2025-05-27T17:48:29.349082351Z" level=info msg="StartContainer for \"6256a46a5ef3b4266b89d08f9b7130902f3f6c255f51d9d653de7dc943b3e691\"" May 27 17:48:29.350270 containerd[1720]: time="2025-05-27T17:48:29.350246105Z" level=info msg="connecting to shim 6256a46a5ef3b4266b89d08f9b7130902f3f6c255f51d9d653de7dc943b3e691" address="unix:///run/containerd/s/97c8ac7146c7ca87dc2724a946741aa2ff073870e4352ca1bda9bdf90c9a906e" protocol=ttrpc version=3 May 27 17:48:29.371652 systemd[1]: Started cri-containerd-6256a46a5ef3b4266b89d08f9b7130902f3f6c255f51d9d653de7dc943b3e691.scope - libcontainer container 6256a46a5ef3b4266b89d08f9b7130902f3f6c255f51d9d653de7dc943b3e691. 
May 27 17:48:29.399277 containerd[1720]: time="2025-05-27T17:48:29.399243848Z" level=info msg="StartContainer for \"6256a46a5ef3b4266b89d08f9b7130902f3f6c255f51d9d653de7dc943b3e691\" returns successfully"
May 27 17:48:29.401048 containerd[1720]: time="2025-05-27T17:48:29.401023015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-72wxx,Uid:a8a64e5a-69ee-43ef-bff4-0829d1701feb,Namespace:tigera-operator,Attempt:0,}"
May 27 17:48:29.439008 containerd[1720]: time="2025-05-27T17:48:29.438895802Z" level=info msg="connecting to shim 8fe5fbd3069493090b16052a0aec29f6860967f0c0ecb5380420d9338473a936" address="unix:///run/containerd/s/1cb1aebb07a5347675a51f9a3f072829e173310ee82d1010df36a3e31532564e" namespace=k8s.io protocol=ttrpc version=3
May 27 17:48:29.455778 systemd[1]: Started cri-containerd-8fe5fbd3069493090b16052a0aec29f6860967f0c0ecb5380420d9338473a936.scope - libcontainer container 8fe5fbd3069493090b16052a0aec29f6860967f0c0ecb5380420d9338473a936.
May 27 17:48:29.506447 containerd[1720]: time="2025-05-27T17:48:29.506422359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-72wxx,Uid:a8a64e5a-69ee-43ef-bff4-0829d1701feb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8fe5fbd3069493090b16052a0aec29f6860967f0c0ecb5380420d9338473a936\""
May 27 17:48:29.507521 containerd[1720]: time="2025-05-27T17:48:29.507498107Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 27 17:48:30.427318 kubelet[3133]: I0527 17:48:30.427200 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mcll6" podStartSLOduration=2.427183581 podStartE2EDuration="2.427183581s" podCreationTimestamp="2025-05-27 17:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:48:29.997478579 +0000 UTC m=+7.150958891" watchObservedRunningTime="2025-05-27 17:48:30.427183581 +0000 UTC m=+7.580663951"
May 27 17:48:31.518584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516706241.mount: Deactivated successfully.
May 27 17:48:32.107913 containerd[1720]: time="2025-05-27T17:48:32.107878720Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:32.110325 containerd[1720]: time="2025-05-27T17:48:32.110300421Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 27 17:48:32.112925 containerd[1720]: time="2025-05-27T17:48:32.112850681Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:32.117186 containerd[1720]: time="2025-05-27T17:48:32.117143892Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:32.117546 containerd[1720]: time="2025-05-27T17:48:32.117437865Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.609687338s"
May 27 17:48:32.117546 containerd[1720]: time="2025-05-27T17:48:32.117461119Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 27 17:48:32.122903 containerd[1720]: time="2025-05-27T17:48:32.122876824Z" level=info msg="CreateContainer within sandbox \"8fe5fbd3069493090b16052a0aec29f6860967f0c0ecb5380420d9338473a936\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 27 17:48:32.139190 containerd[1720]: time="2025-05-27T17:48:32.137609371Z" level=info msg="Container 97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4: CDI devices from CRI Config.CDIDevices: []"
May 27 17:48:32.151272 containerd[1720]: time="2025-05-27T17:48:32.151251527Z" level=info msg="CreateContainer within sandbox \"8fe5fbd3069493090b16052a0aec29f6860967f0c0ecb5380420d9338473a936\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4\""
May 27 17:48:32.151626 containerd[1720]: time="2025-05-27T17:48:32.151578836Z" level=info msg="StartContainer for \"97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4\""
May 27 17:48:32.152543 containerd[1720]: time="2025-05-27T17:48:32.152312371Z" level=info msg="connecting to shim 97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4" address="unix:///run/containerd/s/1cb1aebb07a5347675a51f9a3f072829e173310ee82d1010df36a3e31532564e" protocol=ttrpc version=3
May 27 17:48:32.172676 systemd[1]: Started cri-containerd-97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4.scope - libcontainer container 97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4.
May 27 17:48:32.200965 containerd[1720]: time="2025-05-27T17:48:32.200882005Z" level=info msg="StartContainer for \"97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4\" returns successfully"
May 27 17:48:34.490635 systemd[1]: cri-containerd-97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4.scope: Deactivated successfully.
May 27 17:48:34.493910 containerd[1720]: time="2025-05-27T17:48:34.493276740Z" level=info msg="received exit event container_id:\"97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4\" id:\"97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4\" pid:3459 exit_status:1 exited_at:{seconds:1748368114 nanos:490724158}"
May 27 17:48:34.495692 containerd[1720]: time="2025-05-27T17:48:34.493523448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4\" id:\"97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4\" pid:3459 exit_status:1 exited_at:{seconds:1748368114 nanos:490724158}"
May 27 17:48:34.523635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4-rootfs.mount: Deactivated successfully.
May 27 17:48:36.987414 kubelet[3133]: I0527 17:48:36.987227 3133 scope.go:117] "RemoveContainer" containerID="97f6a1d188cf0c28412b9573315ad9a01d620ae35f016d3616c86935e22615c4"
May 27 17:48:36.989690 containerd[1720]: time="2025-05-27T17:48:36.989190514Z" level=info msg="CreateContainer within sandbox \"8fe5fbd3069493090b16052a0aec29f6860967f0c0ecb5380420d9338473a936\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 27 17:48:37.012334 containerd[1720]: time="2025-05-27T17:48:37.010996201Z" level=info msg="Container 6a64abb04a6e60c67302a532431b6f0e04115dcb1198997a94e512ad60606719: CDI devices from CRI Config.CDIDevices: []"
May 27 17:48:37.043803 containerd[1720]: time="2025-05-27T17:48:37.043781181Z" level=info msg="CreateContainer within sandbox \"8fe5fbd3069493090b16052a0aec29f6860967f0c0ecb5380420d9338473a936\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6a64abb04a6e60c67302a532431b6f0e04115dcb1198997a94e512ad60606719\""
May 27 17:48:37.044561 containerd[1720]: time="2025-05-27T17:48:37.044159097Z" level=info msg="StartContainer for \"6a64abb04a6e60c67302a532431b6f0e04115dcb1198997a94e512ad60606719\""
May 27 17:48:37.045154 containerd[1720]: time="2025-05-27T17:48:37.045132292Z" level=info msg="connecting to shim 6a64abb04a6e60c67302a532431b6f0e04115dcb1198997a94e512ad60606719" address="unix:///run/containerd/s/1cb1aebb07a5347675a51f9a3f072829e173310ee82d1010df36a3e31532564e" protocol=ttrpc version=3
May 27 17:48:37.065694 systemd[1]: Started cri-containerd-6a64abb04a6e60c67302a532431b6f0e04115dcb1198997a94e512ad60606719.scope - libcontainer container 6a64abb04a6e60c67302a532431b6f0e04115dcb1198997a94e512ad60606719.
May 27 17:48:37.093554 containerd[1720]: time="2025-05-27T17:48:37.093514892Z" level=info msg="StartContainer for \"6a64abb04a6e60c67302a532431b6f0e04115dcb1198997a94e512ad60606719\" returns successfully"
May 27 17:48:38.003508 kubelet[3133]: I0527 17:48:38.003438 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-72wxx" podStartSLOduration=6.392527601 podStartE2EDuration="9.003421067s" podCreationTimestamp="2025-05-27 17:48:29 +0000 UTC" firstStartedPulling="2025-05-27 17:48:29.507115964 +0000 UTC m=+6.660596273" lastFinishedPulling="2025-05-27 17:48:32.11800943 +0000 UTC m=+9.271489739" observedRunningTime="2025-05-27 17:48:32.987390915 +0000 UTC m=+10.140871228" watchObservedRunningTime="2025-05-27 17:48:38.003421067 +0000 UTC m=+15.156901378"
May 27 17:48:38.030094 sudo[2179]: pam_unix(sudo:session): session closed for user root
May 27 17:48:38.133948 sshd[2178]: Connection closed by 10.200.16.10 port 56282
May 27 17:48:38.134694 sshd-session[2176]: pam_unix(sshd:session): session closed for user core
May 27 17:48:38.138872 systemd[1]: sshd@6-10.200.8.19:22-10.200.16.10:56282.service: Deactivated successfully.
May 27 17:48:38.140883 systemd[1]: session-9.scope: Deactivated successfully.
May 27 17:48:38.141305 systemd[1]: session-9.scope: Consumed 3.673s CPU time, 229.2M memory peak.
May 27 17:48:38.143120 systemd-logind[1703]: Session 9 logged out. Waiting for processes to exit.
May 27 17:48:38.144303 systemd-logind[1703]: Removed session 9.
May 27 17:48:42.864996 systemd[1]: Created slice kubepods-besteffort-podbdc5bf27_dd58_4d30_99b0_2fc52570c6ca.slice - libcontainer container kubepods-besteffort-podbdc5bf27_dd58_4d30_99b0_2fc52570c6ca.slice.
May 27 17:48:42.948778 kubelet[3133]: I0527 17:48:42.948709 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mncc\" (UniqueName: \"kubernetes.io/projected/bdc5bf27-dd58-4d30-99b0-2fc52570c6ca-kube-api-access-6mncc\") pod \"calico-typha-79f6f9cd9c-765dh\" (UID: \"bdc5bf27-dd58-4d30-99b0-2fc52570c6ca\") " pod="calico-system/calico-typha-79f6f9cd9c-765dh"
May 27 17:48:42.948778 kubelet[3133]: I0527 17:48:42.948752 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc5bf27-dd58-4d30-99b0-2fc52570c6ca-tigera-ca-bundle\") pod \"calico-typha-79f6f9cd9c-765dh\" (UID: \"bdc5bf27-dd58-4d30-99b0-2fc52570c6ca\") " pod="calico-system/calico-typha-79f6f9cd9c-765dh"
May 27 17:48:42.949077 kubelet[3133]: I0527 17:48:42.948770 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bdc5bf27-dd58-4d30-99b0-2fc52570c6ca-typha-certs\") pod \"calico-typha-79f6f9cd9c-765dh\" (UID: \"bdc5bf27-dd58-4d30-99b0-2fc52570c6ca\") " pod="calico-system/calico-typha-79f6f9cd9c-765dh"
May 27 17:48:43.168092 containerd[1720]: time="2025-05-27T17:48:43.168048120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79f6f9cd9c-765dh,Uid:bdc5bf27-dd58-4d30-99b0-2fc52570c6ca,Namespace:calico-system,Attempt:0,}"
May 27 17:48:43.213444 containerd[1720]: time="2025-05-27T17:48:43.213414400Z" level=info msg="connecting to shim 84095ab68d359dc4824e7444e6c30d499340c3afccb8cdb458fec283802221dd" address="unix:///run/containerd/s/d1bba3fd149121ac07a057999864afcfb42a104d47e14415920f298602aa0942" namespace=k8s.io protocol=ttrpc version=3
May 27 17:48:43.240163 systemd[1]: Started cri-containerd-84095ab68d359dc4824e7444e6c30d499340c3afccb8cdb458fec283802221dd.scope - libcontainer container 84095ab68d359dc4824e7444e6c30d499340c3afccb8cdb458fec283802221dd.
May 27 17:48:43.253575 systemd[1]: Created slice kubepods-besteffort-poda4b8355c_4089_4348_b86e_13532e546795.slice - libcontainer container kubepods-besteffort-poda4b8355c_4089_4348_b86e_13532e546795.slice.
May 27 17:48:43.289842 containerd[1720]: time="2025-05-27T17:48:43.289820623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79f6f9cd9c-765dh,Uid:bdc5bf27-dd58-4d30-99b0-2fc52570c6ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"84095ab68d359dc4824e7444e6c30d499340c3afccb8cdb458fec283802221dd\""
May 27 17:48:43.291466 containerd[1720]: time="2025-05-27T17:48:43.291416030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\""
May 27 17:48:43.352434 kubelet[3133]: I0527 17:48:43.352412 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4b8355c-4089-4348-b86e-13532e546795-lib-modules\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352512 kubelet[3133]: I0527 17:48:43.352462 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a4b8355c-4089-4348-b86e-13532e546795-node-certs\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352512 kubelet[3133]: I0527 17:48:43.352483 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a4b8355c-4089-4348-b86e-13532e546795-policysync\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352512 kubelet[3133]: I0527 17:48:43.352497 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxlbp\" (UniqueName: \"kubernetes.io/projected/a4b8355c-4089-4348-b86e-13532e546795-kube-api-access-nxlbp\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352594 kubelet[3133]: I0527 17:48:43.352516 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a4b8355c-4089-4348-b86e-13532e546795-cni-bin-dir\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352594 kubelet[3133]: I0527 17:48:43.352542 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a4b8355c-4089-4348-b86e-13532e546795-var-run-calico\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352594 kubelet[3133]: I0527 17:48:43.352559 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4b8355c-4089-4348-b86e-13532e546795-tigera-ca-bundle\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352594 kubelet[3133]: I0527 17:48:43.352576 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a4b8355c-4089-4348-b86e-13532e546795-cni-log-dir\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352594 kubelet[3133]: I0527 17:48:43.352590 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a4b8355c-4089-4348-b86e-13532e546795-flexvol-driver-host\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352695 kubelet[3133]: I0527 17:48:43.352604 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4b8355c-4089-4348-b86e-13532e546795-var-lib-calico\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352695 kubelet[3133]: I0527 17:48:43.352618 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4b8355c-4089-4348-b86e-13532e546795-xtables-lock\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.352695 kubelet[3133]: I0527 17:48:43.352633 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a4b8355c-4089-4348-b86e-13532e546795-cni-net-dir\") pod \"calico-node-nm6q2\" (UID: \"a4b8355c-4089-4348-b86e-13532e546795\") " pod="calico-system/calico-node-nm6q2"
May 27 17:48:43.455592 kubelet[3133]: E0527 17:48:43.455347 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.455592 kubelet[3133]: W0527 17:48:43.455367 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.455592 kubelet[3133]: E0527 17:48:43.455392 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.456756 kubelet[3133]: E0527 17:48:43.456715 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.456756 kubelet[3133]: W0527 17:48:43.456730 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.456756 kubelet[3133]: E0527 17:48:43.456744 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.458664 kubelet[3133]: E0527 17:48:43.458652 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.459562 kubelet[3133]: W0527 17:48:43.458722 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.459562 kubelet[3133]: E0527 17:48:43.458737 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.459814 kubelet[3133]: E0527 17:48:43.459743 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.459814 kubelet[3133]: W0527 17:48:43.459756 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.459814 kubelet[3133]: E0527 17:48:43.459768 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.460040 kubelet[3133]: E0527 17:48:43.459970 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.460040 kubelet[3133]: W0527 17:48:43.459978 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.460040 kubelet[3133]: E0527 17:48:43.459985 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.460136 kubelet[3133]: E0527 17:48:43.460130 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.460225 kubelet[3133]: W0527 17:48:43.460166 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.460225 kubelet[3133]: E0527 17:48:43.460174 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.460370 kubelet[3133]: E0527 17:48:43.460333 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.460370 kubelet[3133]: W0527 17:48:43.460340 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.460370 kubelet[3133]: E0527 17:48:43.460347 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.461740 kubelet[3133]: E0527 17:48:43.461679 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.461740 kubelet[3133]: W0527 17:48:43.461692 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.461740 kubelet[3133]: E0527 17:48:43.461703 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.461964 kubelet[3133]: E0527 17:48:43.461923 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.461964 kubelet[3133]: W0527 17:48:43.461931 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.461964 kubelet[3133]: E0527 17:48:43.461938 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.462144 kubelet[3133]: E0527 17:48:43.462107 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.462144 kubelet[3133]: W0527 17:48:43.462114 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.462144 kubelet[3133]: E0527 17:48:43.462120 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.463209 kubelet[3133]: E0527 17:48:43.463196 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.463423 kubelet[3133]: W0527 17:48:43.463280 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.463423 kubelet[3133]: E0527 17:48:43.463310 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.463604 kubelet[3133]: E0527 17:48:43.463597 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.463690 kubelet[3133]: W0527 17:48:43.463646 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.463690 kubelet[3133]: E0527 17:48:43.463659 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.463859 kubelet[3133]: E0527 17:48:43.463837 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.463859 kubelet[3133]: W0527 17:48:43.463844 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.463859 kubelet[3133]: E0527 17:48:43.463851 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.464232 kubelet[3133]: E0527 17:48:43.464185 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.464232 kubelet[3133]: W0527 17:48:43.464194 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.464232 kubelet[3133]: E0527 17:48:43.464205 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.464461 kubelet[3133]: E0527 17:48:43.464407 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.464461 kubelet[3133]: W0527 17:48:43.464413 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.464461 kubelet[3133]: E0527 17:48:43.464421 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.464697 kubelet[3133]: E0527 17:48:43.464650 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.464697 kubelet[3133]: W0527 17:48:43.464657 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.464697 kubelet[3133]: E0527 17:48:43.464664 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.464887 kubelet[3133]: E0527 17:48:43.464860 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.464887 kubelet[3133]: W0527 17:48:43.464867 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.464887 kubelet[3133]: E0527 17:48:43.464875 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.467580 kubelet[3133]: E0527 17:48:43.467559 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.467580 kubelet[3133]: W0527 17:48:43.467575 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.467671 kubelet[3133]: E0527 17:48:43.467587 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.491350 kubelet[3133]: E0527 17:48:43.491323 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8l7g" podUID="b3a87a97-33eb-4b72-b6d1-a7277f7e95df"
May 27 17:48:43.542211 kubelet[3133]: E0527 17:48:43.542180 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.542211 kubelet[3133]: W0527 17:48:43.542206 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.542330 kubelet[3133]: E0527 17:48:43.542218 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.542330 kubelet[3133]: E0527 17:48:43.542315 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.542330 kubelet[3133]: W0527 17:48:43.542320 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.542330 kubelet[3133]: E0527 17:48:43.542327 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.542427 kubelet[3133]: E0527 17:48:43.542420 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.542427 kubelet[3133]: W0527 17:48:43.542425 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.542468 kubelet[3133]: E0527 17:48:43.542432 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.542656 kubelet[3133]: E0527 17:48:43.542578 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.542656 kubelet[3133]: W0527 17:48:43.542585 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.542656 kubelet[3133]: E0527 17:48:43.542592 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.542867 kubelet[3133]: E0527 17:48:43.542688 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.542867 kubelet[3133]: W0527 17:48:43.542693 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.542867 kubelet[3133]: E0527 17:48:43.542699 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.542867 kubelet[3133]: E0527 17:48:43.542790 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.542867 kubelet[3133]: W0527 17:48:43.542795 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.542867 kubelet[3133]: E0527 17:48:43.542802 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.543204 kubelet[3133]: E0527 17:48:43.542898 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.543204 kubelet[3133]: W0527 17:48:43.542902 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.543204 kubelet[3133]: E0527 17:48:43.542908 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.543204 kubelet[3133]: E0527 17:48:43.542992 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.543204 kubelet[3133]: W0527 17:48:43.542996 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.543204 kubelet[3133]: E0527 17:48:43.543003 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.543204 kubelet[3133]: E0527 17:48:43.543104 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.543204 kubelet[3133]: W0527 17:48:43.543109 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.543204 kubelet[3133]: E0527 17:48:43.543114 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.543204 kubelet[3133]: E0527 17:48:43.543192 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.543864 kubelet[3133]: W0527 17:48:43.543196 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.543864 kubelet[3133]: E0527 17:48:43.543202 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.543864 kubelet[3133]: E0527 17:48:43.543275 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.543864 kubelet[3133]: W0527 17:48:43.543280 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.543864 kubelet[3133]: E0527 17:48:43.543286 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.543864 kubelet[3133]: E0527 17:48:43.543365 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.543864 kubelet[3133]: W0527 17:48:43.543370 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.543864 kubelet[3133]: E0527 17:48:43.543375 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:48:43.543864 kubelet[3133]: E0527 17:48:43.543455 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:48:43.543864 kubelet[3133]: W0527 17:48:43.543459 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:48:43.544860 kubelet[3133]: E0527 17:48:43.543464 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 27 17:48:43.544860 kubelet[3133]: E0527 17:48:43.543525 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.544860 kubelet[3133]: W0527 17:48:43.543530 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.544860 kubelet[3133]: E0527 17:48:43.543559 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.544860 kubelet[3133]: E0527 17:48:43.543623 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.544860 kubelet[3133]: W0527 17:48:43.543628 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.544860 kubelet[3133]: E0527 17:48:43.543634 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.544860 kubelet[3133]: E0527 17:48:43.543699 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.544860 kubelet[3133]: W0527 17:48:43.543703 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.544860 kubelet[3133]: E0527 17:48:43.543709 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.545072 kubelet[3133]: E0527 17:48:43.543794 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.545072 kubelet[3133]: W0527 17:48:43.543799 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.545072 kubelet[3133]: E0527 17:48:43.543806 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.545072 kubelet[3133]: E0527 17:48:43.543881 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.545072 kubelet[3133]: W0527 17:48:43.543885 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.545072 kubelet[3133]: E0527 17:48:43.543891 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.545072 kubelet[3133]: E0527 17:48:43.543977 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.545072 kubelet[3133]: W0527 17:48:43.543981 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.545072 kubelet[3133]: E0527 17:48:43.543986 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.545072 kubelet[3133]: E0527 17:48:43.544174 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.545259 kubelet[3133]: W0527 17:48:43.544181 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.545259 kubelet[3133]: E0527 17:48:43.544188 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.554403 kubelet[3133]: E0527 17:48:43.554363 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.554403 kubelet[3133]: W0527 17:48:43.554375 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.554403 kubelet[3133]: E0527 17:48:43.554385 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.554626 kubelet[3133]: I0527 17:48:43.554543 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwf2s\" (UniqueName: \"kubernetes.io/projected/b3a87a97-33eb-4b72-b6d1-a7277f7e95df-kube-api-access-fwf2s\") pod \"csi-node-driver-k8l7g\" (UID: \"b3a87a97-33eb-4b72-b6d1-a7277f7e95df\") " pod="calico-system/csi-node-driver-k8l7g" May 27 17:48:43.554699 kubelet[3133]: E0527 17:48:43.554693 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.554756 kubelet[3133]: W0527 17:48:43.554735 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.554832 kubelet[3133]: E0527 17:48:43.554746 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.554832 kubelet[3133]: I0527 17:48:43.554796 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3a87a97-33eb-4b72-b6d1-a7277f7e95df-kubelet-dir\") pod \"csi-node-driver-k8l7g\" (UID: \"b3a87a97-33eb-4b72-b6d1-a7277f7e95df\") " pod="calico-system/csi-node-driver-k8l7g" May 27 17:48:43.554986 kubelet[3133]: E0527 17:48:43.554951 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.554986 kubelet[3133]: W0527 17:48:43.554957 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.554986 kubelet[3133]: E0527 17:48:43.554963 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.555095 kubelet[3133]: I0527 17:48:43.554974 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b3a87a97-33eb-4b72-b6d1-a7277f7e95df-registration-dir\") pod \"csi-node-driver-k8l7g\" (UID: \"b3a87a97-33eb-4b72-b6d1-a7277f7e95df\") " pod="calico-system/csi-node-driver-k8l7g" May 27 17:48:43.555216 kubelet[3133]: E0527 17:48:43.555165 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.555216 kubelet[3133]: W0527 17:48:43.555171 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.555216 kubelet[3133]: E0527 17:48:43.555177 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.555216 kubelet[3133]: I0527 17:48:43.555187 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b3a87a97-33eb-4b72-b6d1-a7277f7e95df-varrun\") pod \"csi-node-driver-k8l7g\" (UID: \"b3a87a97-33eb-4b72-b6d1-a7277f7e95df\") " pod="calico-system/csi-node-driver-k8l7g" May 27 17:48:43.555440 kubelet[3133]: E0527 17:48:43.555389 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.555440 kubelet[3133]: W0527 17:48:43.555397 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.555440 kubelet[3133]: E0527 17:48:43.555406 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.555440 kubelet[3133]: I0527 17:48:43.555430 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b3a87a97-33eb-4b72-b6d1-a7277f7e95df-socket-dir\") pod \"csi-node-driver-k8l7g\" (UID: \"b3a87a97-33eb-4b72-b6d1-a7277f7e95df\") " pod="calico-system/csi-node-driver-k8l7g" May 27 17:48:43.555975 kubelet[3133]: E0527 17:48:43.555956 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.555975 kubelet[3133]: W0527 17:48:43.555974 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.556050 kubelet[3133]: E0527 17:48:43.555987 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.556809 kubelet[3133]: E0527 17:48:43.556716 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.556809 kubelet[3133]: W0527 17:48:43.556733 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.556809 kubelet[3133]: E0527 17:48:43.556745 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.556925 containerd[1720]: time="2025-05-27T17:48:43.556722731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nm6q2,Uid:a4b8355c-4089-4348-b86e-13532e546795,Namespace:calico-system,Attempt:0,}" May 27 17:48:43.557182 kubelet[3133]: E0527 17:48:43.557125 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.557182 kubelet[3133]: W0527 17:48:43.557136 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.557182 kubelet[3133]: E0527 17:48:43.557148 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.557554 kubelet[3133]: E0527 17:48:43.557433 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.557554 kubelet[3133]: W0527 17:48:43.557442 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.557554 kubelet[3133]: E0527 17:48:43.557452 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.557899 kubelet[3133]: E0527 17:48:43.557836 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.557899 kubelet[3133]: W0527 17:48:43.557847 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.557899 kubelet[3133]: E0527 17:48:43.557858 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.558408 kubelet[3133]: E0527 17:48:43.558312 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.558553 kubelet[3133]: W0527 17:48:43.558473 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.558553 kubelet[3133]: E0527 17:48:43.558490 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.559153 kubelet[3133]: E0527 17:48:43.559076 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.559153 kubelet[3133]: W0527 17:48:43.559090 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.559153 kubelet[3133]: E0527 17:48:43.559102 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.559487 kubelet[3133]: E0527 17:48:43.559363 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.559687 kubelet[3133]: W0527 17:48:43.559373 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.559687 kubelet[3133]: E0527 17:48:43.559667 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.560398 kubelet[3133]: E0527 17:48:43.560156 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.560398 kubelet[3133]: W0527 17:48:43.560252 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.560398 kubelet[3133]: E0527 17:48:43.560267 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.560701 kubelet[3133]: E0527 17:48:43.560686 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.563222 kubelet[3133]: W0527 17:48:43.560777 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.563222 kubelet[3133]: E0527 17:48:43.560790 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.595781 containerd[1720]: time="2025-05-27T17:48:43.595742500Z" level=info msg="connecting to shim 6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8" address="unix:///run/containerd/s/1504f48beba164bd066888962cd0918e23265c062daa6a7ee5c4cfa71f4adb17" namespace=k8s.io protocol=ttrpc version=3 May 27 17:48:43.616673 systemd[1]: Started cri-containerd-6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8.scope - libcontainer container 6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8. 
May 27 17:48:43.634215 containerd[1720]: time="2025-05-27T17:48:43.634195366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nm6q2,Uid:a4b8355c-4089-4348-b86e-13532e546795,Namespace:calico-system,Attempt:0,} returns sandbox id \"6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8\"" May 27 17:48:43.655919 kubelet[3133]: E0527 17:48:43.655903 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.655919 kubelet[3133]: W0527 17:48:43.655915 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.656050 kubelet[3133]: E0527 17:48:43.655925 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.656050 kubelet[3133]: E0527 17:48:43.656040 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.656050 kubelet[3133]: W0527 17:48:43.656046 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.656167 kubelet[3133]: E0527 17:48:43.656053 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.656193 kubelet[3133]: E0527 17:48:43.656167 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.656193 kubelet[3133]: W0527 17:48:43.656175 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.656193 kubelet[3133]: E0527 17:48:43.656183 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.656285 kubelet[3133]: E0527 17:48:43.656274 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.656285 kubelet[3133]: W0527 17:48:43.656282 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.656337 kubelet[3133]: E0527 17:48:43.656289 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.656384 kubelet[3133]: E0527 17:48:43.656376 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.656384 kubelet[3133]: W0527 17:48:43.656382 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.656425 kubelet[3133]: E0527 17:48:43.656388 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.656554 kubelet[3133]: E0527 17:48:43.656546 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.656580 kubelet[3133]: W0527 17:48:43.656554 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.656580 kubelet[3133]: E0527 17:48:43.656561 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.656669 kubelet[3133]: E0527 17:48:43.656661 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.656669 kubelet[3133]: W0527 17:48:43.656667 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.656712 kubelet[3133]: E0527 17:48:43.656673 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.656786 kubelet[3133]: E0527 17:48:43.656781 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.656786 kubelet[3133]: W0527 17:48:43.656786 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.656849 kubelet[3133]: E0527 17:48:43.656792 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.656930 kubelet[3133]: E0527 17:48:43.656919 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.656930 kubelet[3133]: W0527 17:48:43.656928 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.656987 kubelet[3133]: E0527 17:48:43.656935 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.657046 kubelet[3133]: E0527 17:48:43.657036 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.657046 kubelet[3133]: W0527 17:48:43.657043 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.657111 kubelet[3133]: E0527 17:48:43.657050 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.657143 kubelet[3133]: E0527 17:48:43.657130 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.657143 kubelet[3133]: W0527 17:48:43.657134 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.657143 kubelet[3133]: E0527 17:48:43.657139 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.657233 kubelet[3133]: E0527 17:48:43.657225 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.657233 kubelet[3133]: W0527 17:48:43.657231 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.657300 kubelet[3133]: E0527 17:48:43.657237 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.657323 kubelet[3133]: E0527 17:48:43.657317 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.657323 kubelet[3133]: W0527 17:48:43.657320 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.657385 kubelet[3133]: E0527 17:48:43.657326 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:43.657414 kubelet[3133]: E0527 17:48:43.657408 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.657414 kubelet[3133]: W0527 17:48:43.657412 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.657470 kubelet[3133]: E0527 17:48:43.657418 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:43.657508 kubelet[3133]: E0527 17:48:43.657495 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:43.657508 kubelet[3133]: W0527 17:48:43.657501 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:43.657508 kubelet[3133]: E0527 17:48:43.657506 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:48:44.833014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233212866.mount: Deactivated successfully. 
May 27 17:48:44.930679 kubelet[3133]: E0527 17:48:44.930332 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8l7g" podUID="b3a87a97-33eb-4b72-b6d1-a7277f7e95df" May 27 17:48:46.110021 containerd[1720]: time="2025-05-27T17:48:46.109986481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:46.112167 containerd[1720]: time="2025-05-27T17:48:46.112135224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 17:48:46.114276 containerd[1720]: time="2025-05-27T17:48:46.114249187Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:46.117226 containerd[1720]: time="2025-05-27T17:48:46.117192794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:46.117753 containerd[1720]: time="2025-05-27T17:48:46.117690258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.826249767s" May 27 17:48:46.117753 containerd[1720]: time="2025-05-27T17:48:46.117714239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 17:48:46.118717 containerd[1720]: time="2025-05-27T17:48:46.118580506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 17:48:46.136841 containerd[1720]: time="2025-05-27T17:48:46.136812393Z" level=info msg="CreateContainer within sandbox \"84095ab68d359dc4824e7444e6c30d499340c3afccb8cdb458fec283802221dd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 17:48:46.163692 containerd[1720]: time="2025-05-27T17:48:46.162898819Z" level=info msg="Container 797a17fe6101d7a8e21a98bad0e4e4b4911f57e9243c3efd11a1ff76efcc9b34: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:46.176187 containerd[1720]: time="2025-05-27T17:48:46.176165448Z" level=info msg="CreateContainer within sandbox \"84095ab68d359dc4824e7444e6c30d499340c3afccb8cdb458fec283802221dd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"797a17fe6101d7a8e21a98bad0e4e4b4911f57e9243c3efd11a1ff76efcc9b34\"" May 27 17:48:46.176497 containerd[1720]: time="2025-05-27T17:48:46.176457302Z" level=info msg="StartContainer for \"797a17fe6101d7a8e21a98bad0e4e4b4911f57e9243c3efd11a1ff76efcc9b34\"" May 27 17:48:46.177409 containerd[1720]: time="2025-05-27T17:48:46.177381093Z" level=info msg="connecting to shim 797a17fe6101d7a8e21a98bad0e4e4b4911f57e9243c3efd11a1ff76efcc9b34" address="unix:///run/containerd/s/d1bba3fd149121ac07a057999864afcfb42a104d47e14415920f298602aa0942" protocol=ttrpc version=3 May 27 17:48:46.197690 systemd[1]: Started cri-containerd-797a17fe6101d7a8e21a98bad0e4e4b4911f57e9243c3efd11a1ff76efcc9b34.scope - libcontainer container 797a17fe6101d7a8e21a98bad0e4e4b4911f57e9243c3efd11a1ff76efcc9b34. 
May 27 17:48:46.236783 containerd[1720]: time="2025-05-27T17:48:46.236726814Z" level=info msg="StartContainer for \"797a17fe6101d7a8e21a98bad0e4e4b4911f57e9243c3efd11a1ff76efcc9b34\" returns successfully" May 27 17:48:46.930529 kubelet[3133]: E0527 17:48:46.929779 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8l7g" podUID="b3a87a97-33eb-4b72-b6d1-a7277f7e95df" May 27 17:48:47.018221 kubelet[3133]: I0527 17:48:47.017965 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79f6f9cd9c-765dh" podStartSLOduration=2.190387754 podStartE2EDuration="5.017842136s" podCreationTimestamp="2025-05-27 17:48:42 +0000 UTC" firstStartedPulling="2025-05-27 17:48:43.290928764 +0000 UTC m=+20.444409078" lastFinishedPulling="2025-05-27 17:48:46.11838315 +0000 UTC m=+23.271863460" observedRunningTime="2025-05-27 17:48:47.017724707 +0000 UTC m=+24.171205019" watchObservedRunningTime="2025-05-27 17:48:47.017842136 +0000 UTC m=+24.171322445" May 27 17:48:47.070148 kubelet[3133]: E0527 17:48:47.070125 3133 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:48:47.070148 kubelet[3133]: W0527 17:48:47.070145 3133 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:48:47.070278 kubelet[3133]: E0527 17:48:47.070161 3133 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:48:47.347684 containerd[1720]: time="2025-05-27T17:48:47.347609324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:47.349666 containerd[1720]: time="2025-05-27T17:48:47.349631285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 17:48:47.352544 containerd[1720]: time="2025-05-27T17:48:47.351902747Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:47.356471 containerd[1720]: time="2025-05-27T17:48:47.356433378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:47.356819 containerd[1720]: time="2025-05-27T17:48:47.356723748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.238114241s" May 27 17:48:47.356819 containerd[1720]: time="2025-05-27T17:48:47.356750339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 17:48:47.362443 containerd[1720]: time="2025-05-27T17:48:47.362420555Z" level=info msg="CreateContainer within sandbox \"6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 17:48:47.377547 containerd[1720]: time="2025-05-27T17:48:47.376698746Z" level=info msg="Container 0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:47.392287 containerd[1720]: time="2025-05-27T17:48:47.392265555Z" level=info msg="CreateContainer within sandbox \"6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b\"" May 27 17:48:47.392728 containerd[1720]: time="2025-05-27T17:48:47.392666762Z" level=info msg="StartContainer for \"0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b\"" May 27 17:48:47.394037 containerd[1720]: time="2025-05-27T17:48:47.394018359Z" level=info msg="connecting to shim 0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b" address="unix:///run/containerd/s/1504f48beba164bd066888962cd0918e23265c062daa6a7ee5c4cfa71f4adb17" protocol=ttrpc version=3 May 27 17:48:47.409699 systemd[1]: Started cri-containerd-0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b.scope - libcontainer container 0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b. May 27 17:48:47.443709 containerd[1720]: time="2025-05-27T17:48:47.443676527Z" level=info msg="StartContainer for \"0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b\" returns successfully" May 27 17:48:47.445759 systemd[1]: cri-containerd-0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b.scope: Deactivated successfully. 
May 27 17:48:47.447786 containerd[1720]: time="2025-05-27T17:48:47.447737615Z" level=info msg="received exit event container_id:\"0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b\" id:\"0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b\" pid:3862 exited_at:{seconds:1748368127 nanos:447476975}" May 27 17:48:47.447995 containerd[1720]: time="2025-05-27T17:48:47.447978688Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b\" id:\"0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b\" pid:3862 exited_at:{seconds:1748368127 nanos:447476975}" May 27 17:48:47.461187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0aace08527edee9ade36c15be0a147475f48bdb6e0800f96d783b0b77031465b-rootfs.mount: Deactivated successfully. May 27 17:48:48.010900 kubelet[3133]: I0527 17:48:48.010881 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:48:48.930453 kubelet[3133]: E0527 17:48:48.930090 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8l7g" podUID="b3a87a97-33eb-4b72-b6d1-a7277f7e95df" May 27 17:48:49.015219 containerd[1720]: time="2025-05-27T17:48:49.015155552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 17:48:50.930457 kubelet[3133]: E0527 17:48:50.930421 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8l7g" podUID="b3a87a97-33eb-4b72-b6d1-a7277f7e95df" May 27 17:48:51.439944 containerd[1720]: time="2025-05-27T17:48:51.439906893Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:51.442225 containerd[1720]: time="2025-05-27T17:48:51.442193568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 17:48:51.446046 containerd[1720]: time="2025-05-27T17:48:51.446010153Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:51.449306 containerd[1720]: time="2025-05-27T17:48:51.449267632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:48:51.449736 containerd[1720]: time="2025-05-27T17:48:51.449651066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 2.434439873s" May 27 17:48:51.449736 containerd[1720]: time="2025-05-27T17:48:51.449675322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 17:48:51.456137 containerd[1720]: time="2025-05-27T17:48:51.456115549Z" level=info msg="CreateContainer within sandbox \"6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 17:48:51.472657 containerd[1720]: time="2025-05-27T17:48:51.471560372Z" level=info msg="Container 14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56: CDI devices from CRI 
Config.CDIDevices: []" May 27 17:48:51.485861 containerd[1720]: time="2025-05-27T17:48:51.485835052Z" level=info msg="CreateContainer within sandbox \"6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56\"" May 27 17:48:51.486599 containerd[1720]: time="2025-05-27T17:48:51.486362523Z" level=info msg="StartContainer for \"14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56\"" May 27 17:48:51.487893 containerd[1720]: time="2025-05-27T17:48:51.487857530Z" level=info msg="connecting to shim 14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56" address="unix:///run/containerd/s/1504f48beba164bd066888962cd0918e23265c062daa6a7ee5c4cfa71f4adb17" protocol=ttrpc version=3 May 27 17:48:51.508709 systemd[1]: Started cri-containerd-14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56.scope - libcontainer container 14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56. May 27 17:48:51.542381 containerd[1720]: time="2025-05-27T17:48:51.542294478Z" level=info msg="StartContainer for \"14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56\" returns successfully" May 27 17:48:52.732868 containerd[1720]: time="2025-05-27T17:48:52.732819886Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:48:52.734390 systemd[1]: cri-containerd-14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56.scope: Deactivated successfully. May 27 17:48:52.734643 systemd[1]: cri-containerd-14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56.scope: Consumed 357ms CPU time, 190.5M memory peak, 170.9M written to disk. 
May 27 17:48:52.736480 containerd[1720]: time="2025-05-27T17:48:52.736452549Z" level=info msg="received exit event container_id:\"14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56\" id:\"14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56\" pid:3919 exited_at:{seconds:1748368132 nanos:736273456}" May 27 17:48:52.736643 containerd[1720]: time="2025-05-27T17:48:52.736618296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56\" id:\"14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56\" pid:3919 exited_at:{seconds:1748368132 nanos:736273456}" May 27 17:48:52.752405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14b7d8f43499444432c681a4a6b0d188d99ad9a05d20e7e4979231a535475c56-rootfs.mount: Deactivated successfully. May 27 17:48:52.795548 kubelet[3133]: I0527 17:48:52.795493 3133 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:48:53.105449 systemd[1]: Created slice kubepods-burstable-podf60ba011_2c39_4ab8_98b6_55b49dcacdc9.slice - libcontainer container kubepods-burstable-podf60ba011_2c39_4ab8_98b6_55b49dcacdc9.slice. 
May 27 17:48:53.207961 containerd[1720]: time="2025-05-27T17:48:53.207450055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8l7g,Uid:b3a87a97-33eb-4b72-b6d1-a7277f7e95df,Namespace:calico-system,Attempt:0,}" May 27 17:48:53.208010 kubelet[3133]: I0527 17:48:53.116153 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/009c619a-fd70-49b4-823f-4da0d1b1b32b-config-volume\") pod \"coredns-674b8bbfcf-m4kcv\" (UID: \"009c619a-fd70-49b4-823f-4da0d1b1b32b\") " pod="kube-system/coredns-674b8bbfcf-m4kcv" May 27 17:48:53.208010 kubelet[3133]: I0527 17:48:53.116182 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vqv\" (UniqueName: \"kubernetes.io/projected/009c619a-fd70-49b4-823f-4da0d1b1b32b-kube-api-access-j4vqv\") pod \"coredns-674b8bbfcf-m4kcv\" (UID: \"009c619a-fd70-49b4-823f-4da0d1b1b32b\") " pod="kube-system/coredns-674b8bbfcf-m4kcv" May 27 17:48:53.208010 kubelet[3133]: I0527 17:48:53.116202 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q547d\" (UniqueName: \"kubernetes.io/projected/f60ba011-2c39-4ab8-98b6-55b49dcacdc9-kube-api-access-q547d\") pod \"coredns-674b8bbfcf-g2n5z\" (UID: \"f60ba011-2c39-4ab8-98b6-55b49dcacdc9\") " pod="kube-system/coredns-674b8bbfcf-g2n5z" May 27 17:48:53.208010 kubelet[3133]: I0527 17:48:53.116244 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f60ba011-2c39-4ab8-98b6-55b49dcacdc9-config-volume\") pod \"coredns-674b8bbfcf-g2n5z\" (UID: \"f60ba011-2c39-4ab8-98b6-55b49dcacdc9\") " pod="kube-system/coredns-674b8bbfcf-g2n5z" May 27 17:48:53.115552 systemd[1]: Created slice kubepods-burstable-pod009c619a_fd70_49b4_823f_4da0d1b1b32b.slice - libcontainer container 
kubepods-burstable-pod009c619a_fd70_49b4_823f_4da0d1b1b32b.slice. May 27 17:48:53.132761 systemd[1]: Created slice kubepods-besteffort-podb3a87a97_33eb_4b72_b6d1_a7277f7e95df.slice - libcontainer container kubepods-besteffort-podb3a87a97_33eb_4b72_b6d1_a7277f7e95df.slice. May 27 17:48:53.418280 kubelet[3133]: I0527 17:48:53.418249 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h2m9\" (UniqueName: \"kubernetes.io/projected/fdd8d26e-edda-4b13-b4da-f6328fc5d832-kube-api-access-7h2m9\") pod \"calico-apiserver-5645b5c46d-trkx5\" (UID: \"fdd8d26e-edda-4b13-b4da-f6328fc5d832\") " pod="calico-apiserver/calico-apiserver-5645b5c46d-trkx5" May 27 17:48:53.418280 kubelet[3133]: I0527 17:48:53.418279 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fdd8d26e-edda-4b13-b4da-f6328fc5d832-calico-apiserver-certs\") pod \"calico-apiserver-5645b5c46d-trkx5\" (UID: \"fdd8d26e-edda-4b13-b4da-f6328fc5d832\") " pod="calico-apiserver/calico-apiserver-5645b5c46d-trkx5" May 27 17:48:53.646623 kubelet[3133]: E0527 17:48:53.518769 3133 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: object "calico-apiserver"/"calico-apiserver-certs" not registered May 27 17:48:53.646623 kubelet[3133]: E0527 17:48:53.518837 3133 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdd8d26e-edda-4b13-b4da-f6328fc5d832-calico-apiserver-certs podName:fdd8d26e-edda-4b13-b4da-f6328fc5d832 nodeName:}" failed. No retries permitted until 2025-05-27 17:48:54.018808511 +0000 UTC m=+31.172288820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/fdd8d26e-edda-4b13-b4da-f6328fc5d832-calico-apiserver-certs") pod "calico-apiserver-5645b5c46d-trkx5" (UID: "fdd8d26e-edda-4b13-b4da-f6328fc5d832") : object "calico-apiserver"/"calico-apiserver-certs" not registered May 27 17:48:53.646623 kubelet[3133]: E0527 17:48:53.522805 3133 projected.go:289] Couldn't get configMap calico-apiserver/kube-root-ca.crt: object "calico-apiserver"/"kube-root-ca.crt" not registered May 27 17:48:53.646623 kubelet[3133]: E0527 17:48:53.522833 3133 projected.go:194] Error preparing data for projected volume kube-api-access-7h2m9 for pod calico-apiserver/calico-apiserver-5645b5c46d-trkx5: object "calico-apiserver"/"kube-root-ca.crt" not registered May 27 17:48:53.646623 kubelet[3133]: E0527 17:48:53.522889 3133 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdd8d26e-edda-4b13-b4da-f6328fc5d832-kube-api-access-7h2m9 podName:fdd8d26e-edda-4b13-b4da-f6328fc5d832 nodeName:}" failed. No retries permitted until 2025-05-27 17:48:54.022861841 +0000 UTC m=+31.176342152 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7h2m9" (UniqueName: "kubernetes.io/projected/fdd8d26e-edda-4b13-b4da-f6328fc5d832-kube-api-access-7h2m9") pod "calico-apiserver-5645b5c46d-trkx5" (UID: "fdd8d26e-edda-4b13-b4da-f6328fc5d832") : object "calico-apiserver"/"kube-root-ca.crt" not registered May 27 17:48:53.647638 containerd[1720]: time="2025-05-27T17:48:53.647599827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4kcv,Uid:009c619a-fd70-49b4-823f-4da0d1b1b32b,Namespace:kube-system,Attempt:0,}" May 27 17:48:53.647894 containerd[1720]: time="2025-05-27T17:48:53.647599855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g2n5z,Uid:f60ba011-2c39-4ab8-98b6-55b49dcacdc9,Namespace:kube-system,Attempt:0,}" May 27 17:48:53.655221 systemd[1]: Created slice kubepods-besteffort-podfdd8d26e_edda_4b13_b4da_f6328fc5d832.slice - libcontainer container kubepods-besteffort-podfdd8d26e_edda_4b13_b4da_f6328fc5d832.slice. May 27 17:48:53.689198 systemd[1]: Created slice kubepods-besteffort-podd4781aaa_4b71_483f_ac78_f18b8dcfe307.slice - libcontainer container kubepods-besteffort-podd4781aaa_4b71_483f_ac78_f18b8dcfe307.slice. May 27 17:48:53.703058 systemd[1]: Created slice kubepods-besteffort-pod809f7cb6_fbee_4102_b430_229c080e87f0.slice - libcontainer container kubepods-besteffort-pod809f7cb6_fbee_4102_b430_229c080e87f0.slice. May 27 17:48:53.712802 systemd[1]: Created slice kubepods-besteffort-pod8f7bd5e7_15e4_4a39_ab15_f327a8700564.slice - libcontainer container kubepods-besteffort-pod8f7bd5e7_15e4_4a39_ab15_f327a8700564.slice. 
May 27 17:48:53.719798 kubelet[3133]: I0527 17:48:53.719702 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/809f7cb6-fbee-4102-b430-229c080e87f0-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-rdk7t\" (UID: \"809f7cb6-fbee-4102-b430-229c080e87f0\") " pod="calico-system/goldmane-78d55f7ddc-rdk7t" May 27 17:48:53.720695 kubelet[3133]: I0527 17:48:53.720678 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e108e34-a1be-4cee-ab9c-bbcc8960c586-tigera-ca-bundle\") pod \"calico-kube-controllers-f44587b47-pwrq8\" (UID: \"3e108e34-a1be-4cee-ab9c-bbcc8960c586\") " pod="calico-system/calico-kube-controllers-f44587b47-pwrq8" May 27 17:48:53.721234 kubelet[3133]: I0527 17:48:53.721146 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f7bd5e7-15e4-4a39-ab15-f327a8700564-whisker-ca-bundle\") pod \"whisker-7cd9c874f6-xm6dw\" (UID: \"8f7bd5e7-15e4-4a39-ab15-f327a8700564\") " pod="calico-system/whisker-7cd9c874f6-xm6dw" May 27 17:48:53.722198 kubelet[3133]: I0527 17:48:53.721949 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jmj6\" (UniqueName: \"kubernetes.io/projected/8f7bd5e7-15e4-4a39-ab15-f327a8700564-kube-api-access-5jmj6\") pod \"whisker-7cd9c874f6-xm6dw\" (UID: \"8f7bd5e7-15e4-4a39-ab15-f327a8700564\") " pod="calico-system/whisker-7cd9c874f6-xm6dw" May 27 17:48:53.722216 systemd[1]: Created slice kubepods-besteffort-pod3e108e34_a1be_4cee_ab9c_bbcc8960c586.slice - libcontainer container kubepods-besteffort-pod3e108e34_a1be_4cee_ab9c_bbcc8960c586.slice. 
May 27 17:48:53.722480 kubelet[3133]: I0527 17:48:53.722458 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809f7cb6-fbee-4102-b430-229c080e87f0-config\") pod \"goldmane-78d55f7ddc-rdk7t\" (UID: \"809f7cb6-fbee-4102-b430-229c080e87f0\") " pod="calico-system/goldmane-78d55f7ddc-rdk7t" May 27 17:48:53.722555 kubelet[3133]: I0527 17:48:53.722500 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809f7cb6-fbee-4102-b430-229c080e87f0-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-rdk7t\" (UID: \"809f7cb6-fbee-4102-b430-229c080e87f0\") " pod="calico-system/goldmane-78d55f7ddc-rdk7t" May 27 17:48:53.722905 kubelet[3133]: I0527 17:48:53.722889 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4781aaa-4b71-483f-ac78-f18b8dcfe307-calico-apiserver-certs\") pod \"calico-apiserver-5645b5c46d-mbgfx\" (UID: \"d4781aaa-4b71-483f-ac78-f18b8dcfe307\") " pod="calico-apiserver/calico-apiserver-5645b5c46d-mbgfx" May 27 17:48:53.722963 kubelet[3133]: I0527 17:48:53.722921 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mw9v\" (UniqueName: \"kubernetes.io/projected/d4781aaa-4b71-483f-ac78-f18b8dcfe307-kube-api-access-8mw9v\") pod \"calico-apiserver-5645b5c46d-mbgfx\" (UID: \"d4781aaa-4b71-483f-ac78-f18b8dcfe307\") " pod="calico-apiserver/calico-apiserver-5645b5c46d-mbgfx" May 27 17:48:53.722963 kubelet[3133]: I0527 17:48:53.722955 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hx2\" (UniqueName: \"kubernetes.io/projected/809f7cb6-fbee-4102-b430-229c080e87f0-kube-api-access-h9hx2\") pod \"goldmane-78d55f7ddc-rdk7t\" (UID: 
\"809f7cb6-fbee-4102-b430-229c080e87f0\") " pod="calico-system/goldmane-78d55f7ddc-rdk7t" May 27 17:48:53.723015 kubelet[3133]: I0527 17:48:53.722981 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f7bd5e7-15e4-4a39-ab15-f327a8700564-whisker-backend-key-pair\") pod \"whisker-7cd9c874f6-xm6dw\" (UID: \"8f7bd5e7-15e4-4a39-ab15-f327a8700564\") " pod="calico-system/whisker-7cd9c874f6-xm6dw" May 27 17:48:53.723015 kubelet[3133]: I0527 17:48:53.722999 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnn9\" (UniqueName: \"kubernetes.io/projected/3e108e34-a1be-4cee-ab9c-bbcc8960c586-kube-api-access-wdnn9\") pod \"calico-kube-controllers-f44587b47-pwrq8\" (UID: \"3e108e34-a1be-4cee-ab9c-bbcc8960c586\") " pod="calico-system/calico-kube-controllers-f44587b47-pwrq8" May 27 17:48:53.761044 containerd[1720]: time="2025-05-27T17:48:53.761017118Z" level=error msg="Failed to destroy network for sandbox \"42ee6fb7803d12bc78497352729d3016a12cafa2d4d50be03c163e3ff6775a7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:48:53.763364 systemd[1]: run-netns-cni\x2ddcb3a7a0\x2d60e9\x2d36ad\x2d5cd6\x2d25803c0338c8.mount: Deactivated successfully. 
May 27 17:48:53.765993 containerd[1720]: time="2025-05-27T17:48:53.765933815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8l7g,Uid:b3a87a97-33eb-4b72-b6d1-a7277f7e95df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42ee6fb7803d12bc78497352729d3016a12cafa2d4d50be03c163e3ff6775a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:48:53.766388 kubelet[3133]: E0527 17:48:53.766281 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42ee6fb7803d12bc78497352729d3016a12cafa2d4d50be03c163e3ff6775a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:48:53.766388 kubelet[3133]: E0527 17:48:53.766341 3133 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42ee6fb7803d12bc78497352729d3016a12cafa2d4d50be03c163e3ff6775a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k8l7g" May 27 17:48:53.766388 kubelet[3133]: E0527 17:48:53.766361 3133 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42ee6fb7803d12bc78497352729d3016a12cafa2d4d50be03c163e3ff6775a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k8l7g" 
May 27 17:48:53.766624 kubelet[3133]: E0527 17:48:53.766592 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k8l7g_calico-system(b3a87a97-33eb-4b72-b6d1-a7277f7e95df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k8l7g_calico-system(b3a87a97-33eb-4b72-b6d1-a7277f7e95df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42ee6fb7803d12bc78497352729d3016a12cafa2d4d50be03c163e3ff6775a7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k8l7g" podUID="b3a87a97-33eb-4b72-b6d1-a7277f7e95df" May 27 17:48:53.779838 containerd[1720]: time="2025-05-27T17:48:53.779776237Z" level=error msg="Failed to destroy network for sandbox \"79115d015f805cba0146313f8a215e1b80e01750acfe03c718bbf62f0ce3fb2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:48:53.781847 systemd[1]: run-netns-cni\x2dd6466cb8\x2d7a56\x2d3535\x2d939d\x2dfd91ef1e72b1.mount: Deactivated successfully. 
May 27 17:48:53.784029 containerd[1720]: time="2025-05-27T17:48:53.783947054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g2n5z,Uid:f60ba011-2c39-4ab8-98b6-55b49dcacdc9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79115d015f805cba0146313f8a215e1b80e01750acfe03c718bbf62f0ce3fb2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:48:53.784254 kubelet[3133]: E0527 17:48:53.784234 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79115d015f805cba0146313f8a215e1b80e01750acfe03c718bbf62f0ce3fb2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:48:53.784387 kubelet[3133]: E0527 17:48:53.784365 3133 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79115d015f805cba0146313f8a215e1b80e01750acfe03c718bbf62f0ce3fb2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-g2n5z" May 27 17:48:53.784433 kubelet[3133]: E0527 17:48:53.784390 3133 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79115d015f805cba0146313f8a215e1b80e01750acfe03c718bbf62f0ce3fb2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-g2n5z" May 27 17:48:53.784561 kubelet[3133]: E0527 17:48:53.784442 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-g2n5z_kube-system(f60ba011-2c39-4ab8-98b6-55b49dcacdc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-g2n5z_kube-system(f60ba011-2c39-4ab8-98b6-55b49dcacdc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79115d015f805cba0146313f8a215e1b80e01750acfe03c718bbf62f0ce3fb2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-g2n5z" podUID="f60ba011-2c39-4ab8-98b6-55b49dcacdc9" May 27 17:48:53.785503 containerd[1720]: time="2025-05-27T17:48:53.785421783Z" level=error msg="Failed to destroy network for sandbox \"2eef77088502cf11db265e60642cfa26ef49ad194950236a1717cd2d1be84149\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:48:53.787025 systemd[1]: run-netns-cni\x2dab17ae55\x2d4d86\x2d8b45\x2dbfba\x2db8553abd5449.mount: Deactivated successfully. 
May 27 17:48:53.789704 containerd[1720]: time="2025-05-27T17:48:53.789655443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4kcv,Uid:009c619a-fd70-49b4-823f-4da0d1b1b32b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eef77088502cf11db265e60642cfa26ef49ad194950236a1717cd2d1be84149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:53.789940 kubelet[3133]: E0527 17:48:53.789805 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eef77088502cf11db265e60642cfa26ef49ad194950236a1717cd2d1be84149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:53.789940 kubelet[3133]: E0527 17:48:53.789857 3133 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eef77088502cf11db265e60642cfa26ef49ad194950236a1717cd2d1be84149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m4kcv"
May 27 17:48:53.789940 kubelet[3133]: E0527 17:48:53.789875 3133 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eef77088502cf11db265e60642cfa26ef49ad194950236a1717cd2d1be84149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m4kcv"
May 27 17:48:53.790171 kubelet[3133]: E0527 17:48:53.789934 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m4kcv_kube-system(009c619a-fd70-49b4-823f-4da0d1b1b32b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m4kcv_kube-system(009c619a-fd70-49b4-823f-4da0d1b1b32b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eef77088502cf11db265e60642cfa26ef49ad194950236a1717cd2d1be84149\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m4kcv" podUID="009c619a-fd70-49b4-823f-4da0d1b1b32b"
May 27 17:48:53.999952 containerd[1720]: time="2025-05-27T17:48:53.999929306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5645b5c46d-mbgfx,Uid:d4781aaa-4b71-483f-ac78-f18b8dcfe307,Namespace:calico-apiserver,Attempt:0,}"
May 27 17:48:54.012406 containerd[1720]: time="2025-05-27T17:48:54.012368165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-rdk7t,Uid:809f7cb6-fbee-4102-b430-229c080e87f0,Namespace:calico-system,Attempt:0,}"
May 27 17:48:54.018099 containerd[1720]: time="2025-05-27T17:48:54.018080009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cd9c874f6-xm6dw,Uid:8f7bd5e7-15e4-4a39-ab15-f327a8700564,Namespace:calico-system,Attempt:0,}"
May 27 17:48:54.026411 containerd[1720]: time="2025-05-27T17:48:54.026381271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f44587b47-pwrq8,Uid:3e108e34-a1be-4cee-ab9c-bbcc8960c586,Namespace:calico-system,Attempt:0,}"
May 27 17:48:54.040849 containerd[1720]: time="2025-05-27T17:48:54.039271622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\""
May 27 17:48:54.093702 containerd[1720]: time="2025-05-27T17:48:54.093671905Z" level=error msg="Failed to destroy network for sandbox \"25a20e23b96f614e4abeee240123529fdb42dfe12c42a0c4b1e908a42aa72650\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.100194 containerd[1720]: time="2025-05-27T17:48:54.100156172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5645b5c46d-mbgfx,Uid:d4781aaa-4b71-483f-ac78-f18b8dcfe307,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a20e23b96f614e4abeee240123529fdb42dfe12c42a0c4b1e908a42aa72650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.100415 kubelet[3133]: E0527 17:48:54.100395 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a20e23b96f614e4abeee240123529fdb42dfe12c42a0c4b1e908a42aa72650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.101250 kubelet[3133]: E0527 17:48:54.100688 3133 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a20e23b96f614e4abeee240123529fdb42dfe12c42a0c4b1e908a42aa72650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5645b5c46d-mbgfx"
May 27 17:48:54.101250 kubelet[3133]: E0527 17:48:54.100709 3133 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a20e23b96f614e4abeee240123529fdb42dfe12c42a0c4b1e908a42aa72650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5645b5c46d-mbgfx"
May 27 17:48:54.101250 kubelet[3133]: E0527 17:48:54.100748 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5645b5c46d-mbgfx_calico-apiserver(d4781aaa-4b71-483f-ac78-f18b8dcfe307)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5645b5c46d-mbgfx_calico-apiserver(d4781aaa-4b71-483f-ac78-f18b8dcfe307)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25a20e23b96f614e4abeee240123529fdb42dfe12c42a0c4b1e908a42aa72650\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5645b5c46d-mbgfx" podUID="d4781aaa-4b71-483f-ac78-f18b8dcfe307"
May 27 17:48:54.116479 containerd[1720]: time="2025-05-27T17:48:54.116403694Z" level=error msg="Failed to destroy network for sandbox \"2b4e921d2703ae22a853ecca8dfb9709fe1af826f876842a8ecb63562fcd2ccd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.119029 containerd[1720]: time="2025-05-27T17:48:54.118994729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-rdk7t,Uid:809f7cb6-fbee-4102-b430-229c080e87f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b4e921d2703ae22a853ecca8dfb9709fe1af826f876842a8ecb63562fcd2ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.119255 kubelet[3133]: E0527 17:48:54.119237 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b4e921d2703ae22a853ecca8dfb9709fe1af826f876842a8ecb63562fcd2ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.119490 kubelet[3133]: E0527 17:48:54.119477 3133 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b4e921d2703ae22a853ecca8dfb9709fe1af826f876842a8ecb63562fcd2ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-rdk7t"
May 27 17:48:54.119575 kubelet[3133]: E0527 17:48:54.119564 3133 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b4e921d2703ae22a853ecca8dfb9709fe1af826f876842a8ecb63562fcd2ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-rdk7t"
May 27 17:48:54.119737 kubelet[3133]: E0527 17:48:54.119639 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-rdk7t_calico-system(809f7cb6-fbee-4102-b430-229c080e87f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-rdk7t_calico-system(809f7cb6-fbee-4102-b430-229c080e87f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b4e921d2703ae22a853ecca8dfb9709fe1af826f876842a8ecb63562fcd2ccd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0"
May 27 17:48:54.124185 containerd[1720]: time="2025-05-27T17:48:54.124155823Z" level=error msg="Failed to destroy network for sandbox \"603f0cd668fa52489aeffc4d45836cec29f7a3376199c78d0c4eb376ac9756df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.124504 containerd[1720]: time="2025-05-27T17:48:54.124479353Z" level=error msg="Failed to destroy network for sandbox \"0715bf5140fe80681ff09092aa91305629eb9d1102f34517ff266411ae45bba6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.126426 containerd[1720]: time="2025-05-27T17:48:54.126359122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f44587b47-pwrq8,Uid:3e108e34-a1be-4cee-ab9c-bbcc8960c586,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"603f0cd668fa52489aeffc4d45836cec29f7a3376199c78d0c4eb376ac9756df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.127059 kubelet[3133]: E0527 17:48:54.126571 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603f0cd668fa52489aeffc4d45836cec29f7a3376199c78d0c4eb376ac9756df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.127059 kubelet[3133]: E0527 17:48:54.126601 3133 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603f0cd668fa52489aeffc4d45836cec29f7a3376199c78d0c4eb376ac9756df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f44587b47-pwrq8"
May 27 17:48:54.127059 kubelet[3133]: E0527 17:48:54.126621 3133 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603f0cd668fa52489aeffc4d45836cec29f7a3376199c78d0c4eb376ac9756df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f44587b47-pwrq8"
May 27 17:48:54.127218 kubelet[3133]: E0527 17:48:54.126655 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f44587b47-pwrq8_calico-system(3e108e34-a1be-4cee-ab9c-bbcc8960c586)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f44587b47-pwrq8_calico-system(3e108e34-a1be-4cee-ab9c-bbcc8960c586)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"603f0cd668fa52489aeffc4d45836cec29f7a3376199c78d0c4eb376ac9756df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f44587b47-pwrq8" podUID="3e108e34-a1be-4cee-ab9c-bbcc8960c586"
May 27 17:48:54.128569 containerd[1720]: time="2025-05-27T17:48:54.128524569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cd9c874f6-xm6dw,Uid:8f7bd5e7-15e4-4a39-ab15-f327a8700564,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0715bf5140fe80681ff09092aa91305629eb9d1102f34517ff266411ae45bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.128688 kubelet[3133]: E0527 17:48:54.128668 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0715bf5140fe80681ff09092aa91305629eb9d1102f34517ff266411ae45bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.128737 kubelet[3133]: E0527 17:48:54.128706 3133 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0715bf5140fe80681ff09092aa91305629eb9d1102f34517ff266411ae45bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cd9c874f6-xm6dw"
May 27 17:48:54.128737 kubelet[3133]: E0527 17:48:54.128730 3133 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0715bf5140fe80681ff09092aa91305629eb9d1102f34517ff266411ae45bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cd9c874f6-xm6dw"
May 27 17:48:54.128805 kubelet[3133]: E0527 17:48:54.128765 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7cd9c874f6-xm6dw_calico-system(8f7bd5e7-15e4-4a39-ab15-f327a8700564)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7cd9c874f6-xm6dw_calico-system(8f7bd5e7-15e4-4a39-ab15-f327a8700564)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0715bf5140fe80681ff09092aa91305629eb9d1102f34517ff266411ae45bba6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cd9c874f6-xm6dw" podUID="8f7bd5e7-15e4-4a39-ab15-f327a8700564"
May 27 17:48:54.266053 containerd[1720]: time="2025-05-27T17:48:54.265972637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5645b5c46d-trkx5,Uid:fdd8d26e-edda-4b13-b4da-f6328fc5d832,Namespace:calico-apiserver,Attempt:0,}"
May 27 17:48:54.303150 containerd[1720]: time="2025-05-27T17:48:54.303121685Z" level=error msg="Failed to destroy network for sandbox \"8301b6af42bcf14940076d24486a67fdc478bcef9076f8fef589c07621e7d91f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.305719 containerd[1720]: time="2025-05-27T17:48:54.305683390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5645b5c46d-trkx5,Uid:fdd8d26e-edda-4b13-b4da-f6328fc5d832,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8301b6af42bcf14940076d24486a67fdc478bcef9076f8fef589c07621e7d91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.305830 kubelet[3133]: E0527 17:48:54.305813 3133 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8301b6af42bcf14940076d24486a67fdc478bcef9076f8fef589c07621e7d91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:48:54.305873 kubelet[3133]: E0527 17:48:54.305857 3133 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8301b6af42bcf14940076d24486a67fdc478bcef9076f8fef589c07621e7d91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5645b5c46d-trkx5"
May 27 17:48:54.305904 kubelet[3133]: E0527 17:48:54.305875 3133 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8301b6af42bcf14940076d24486a67fdc478bcef9076f8fef589c07621e7d91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5645b5c46d-trkx5"
May 27 17:48:54.305930 kubelet[3133]: E0527 17:48:54.305915 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5645b5c46d-trkx5_calico-apiserver(fdd8d26e-edda-4b13-b4da-f6328fc5d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5645b5c46d-trkx5_calico-apiserver(fdd8d26e-edda-4b13-b4da-f6328fc5d832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8301b6af42bcf14940076d24486a67fdc478bcef9076f8fef589c07621e7d91f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5645b5c46d-trkx5" podUID="fdd8d26e-edda-4b13-b4da-f6328fc5d832"
May 27 17:48:59.956116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668826014.mount: Deactivated successfully.
May 27 17:48:59.986389 containerd[1720]: time="2025-05-27T17:48:59.986354879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:59.991796 containerd[1720]: time="2025-05-27T17:48:59.991774660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372"
May 27 17:48:59.994337 containerd[1720]: time="2025-05-27T17:48:59.994318366Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:59.996940 containerd[1720]: time="2025-05-27T17:48:59.996919578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:48:59.997191 containerd[1720]: time="2025-05-27T17:48:59.997171581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 5.957869204s"
May 27 17:48:59.997225 containerd[1720]: time="2025-05-27T17:48:59.997199661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\""
May 27 17:49:00.011775 containerd[1720]: time="2025-05-27T17:49:00.011748225Z" level=info msg="CreateContainer within sandbox \"6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 27 17:49:00.036997 containerd[1720]: time="2025-05-27T17:49:00.034151503Z" level=info msg="Container 96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:00.052581 containerd[1720]: time="2025-05-27T17:49:00.052558175Z" level=info msg="CreateContainer within sandbox \"6918e5c4cb7939c96720de27e313301bdb1ad0b113fca1ee7f44803514b836c8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30\""
May 27 17:49:00.053048 containerd[1720]: time="2025-05-27T17:49:00.052863794Z" level=info msg="StartContainer for \"96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30\""
May 27 17:49:00.054266 containerd[1720]: time="2025-05-27T17:49:00.054236788Z" level=info msg="connecting to shim 96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30" address="unix:///run/containerd/s/1504f48beba164bd066888962cd0918e23265c062daa6a7ee5c4cfa71f4adb17" protocol=ttrpc version=3
May 27 17:49:00.068656 systemd[1]: Started cri-containerd-96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30.scope - libcontainer container 96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30.
May 27 17:49:00.097240 containerd[1720]: time="2025-05-27T17:49:00.097222487Z" level=info msg="StartContainer for \"96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30\" returns successfully"
May 27 17:49:00.258175 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 27 17:49:00.258346 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
May 27 17:49:00.361248 kubelet[3133]: I0527 17:49:00.361224 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f7bd5e7-15e4-4a39-ab15-f327a8700564-whisker-ca-bundle\") pod \"8f7bd5e7-15e4-4a39-ab15-f327a8700564\" (UID: \"8f7bd5e7-15e4-4a39-ab15-f327a8700564\") "
May 27 17:49:00.363134 kubelet[3133]: I0527 17:49:00.362888 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jmj6\" (UniqueName: \"kubernetes.io/projected/8f7bd5e7-15e4-4a39-ab15-f327a8700564-kube-api-access-5jmj6\") pod \"8f7bd5e7-15e4-4a39-ab15-f327a8700564\" (UID: \"8f7bd5e7-15e4-4a39-ab15-f327a8700564\") "
May 27 17:49:00.363134 kubelet[3133]: I0527 17:49:00.362920 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f7bd5e7-15e4-4a39-ab15-f327a8700564-whisker-backend-key-pair\") pod \"8f7bd5e7-15e4-4a39-ab15-f327a8700564\" (UID: \"8f7bd5e7-15e4-4a39-ab15-f327a8700564\") "
May 27 17:49:00.363301 kubelet[3133]: I0527 17:49:00.362831 3133 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f7bd5e7-15e4-4a39-ab15-f327a8700564-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8f7bd5e7-15e4-4a39-ab15-f327a8700564" (UID: "8f7bd5e7-15e4-4a39-ab15-f327a8700564"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 17:49:00.366838 kubelet[3133]: I0527 17:49:00.366807 3133 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7bd5e7-15e4-4a39-ab15-f327a8700564-kube-api-access-5jmj6" (OuterVolumeSpecName: "kube-api-access-5jmj6") pod "8f7bd5e7-15e4-4a39-ab15-f327a8700564" (UID: "8f7bd5e7-15e4-4a39-ab15-f327a8700564"). InnerVolumeSpecName "kube-api-access-5jmj6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 17:49:00.367636 kubelet[3133]: I0527 17:49:00.367586 3133 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f7bd5e7-15e4-4a39-ab15-f327a8700564-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8f7bd5e7-15e4-4a39-ab15-f327a8700564" (UID: "8f7bd5e7-15e4-4a39-ab15-f327a8700564"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 17:49:00.463647 kubelet[3133]: I0527 17:49:00.463626 3133 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5jmj6\" (UniqueName: \"kubernetes.io/projected/8f7bd5e7-15e4-4a39-ab15-f327a8700564-kube-api-access-5jmj6\") on node \"ci-4344.0.0-a-92788821a5\" DevicePath \"\""
May 27 17:49:00.463647 kubelet[3133]: I0527 17:49:00.463650 3133 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f7bd5e7-15e4-4a39-ab15-f327a8700564-whisker-backend-key-pair\") on node \"ci-4344.0.0-a-92788821a5\" DevicePath \"\""
May 27 17:49:00.463726 kubelet[3133]: I0527 17:49:00.463659 3133 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f7bd5e7-15e4-4a39-ab15-f327a8700564-whisker-ca-bundle\") on node \"ci-4344.0.0-a-92788821a5\" DevicePath \"\""
May 27 17:49:00.935181 systemd[1]: Removed slice kubepods-besteffort-pod8f7bd5e7_15e4_4a39_ab15_f327a8700564.slice - libcontainer container kubepods-besteffort-pod8f7bd5e7_15e4_4a39_ab15_f327a8700564.slice.
May 27 17:49:00.956090 systemd[1]: var-lib-kubelet-pods-8f7bd5e7\x2d15e4\x2d4a39\x2dab15\x2df327a8700564-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5jmj6.mount: Deactivated successfully.
May 27 17:49:00.956166 systemd[1]: var-lib-kubelet-pods-8f7bd5e7\x2d15e4\x2d4a39\x2dab15\x2df327a8700564-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
May 27 17:49:01.064543 kubelet[3133]: I0527 17:49:01.064443 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nm6q2" podStartSLOduration=1.701775349 podStartE2EDuration="18.064428873s" podCreationTimestamp="2025-05-27 17:48:43 +0000 UTC" firstStartedPulling="2025-05-27 17:48:43.635053687 +0000 UTC m=+20.788534000" lastFinishedPulling="2025-05-27 17:48:59.997707227 +0000 UTC m=+37.151187524" observedRunningTime="2025-05-27 17:49:01.063692447 +0000 UTC m=+38.217172756" watchObservedRunningTime="2025-05-27 17:49:01.064428873 +0000 UTC m=+38.217909183"
May 27 17:49:01.142237 systemd[1]: Created slice kubepods-besteffort-pod0af188af_bed9_4f26_9fd6_cb97993cd253.slice - libcontainer container kubepods-besteffort-pod0af188af_bed9_4f26_9fd6_cb97993cd253.slice.
May 27 17:49:01.166436 kubelet[3133]: I0527 17:49:01.166414 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0af188af-bed9-4f26-9fd6-cb97993cd253-whisker-backend-key-pair\") pod \"whisker-7bcdcfc6f8-4mnvt\" (UID: \"0af188af-bed9-4f26-9fd6-cb97993cd253\") " pod="calico-system/whisker-7bcdcfc6f8-4mnvt"
May 27 17:49:01.166569 kubelet[3133]: I0527 17:49:01.166519 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljrzl\" (UniqueName: \"kubernetes.io/projected/0af188af-bed9-4f26-9fd6-cb97993cd253-kube-api-access-ljrzl\") pod \"whisker-7bcdcfc6f8-4mnvt\" (UID: \"0af188af-bed9-4f26-9fd6-cb97993cd253\") " pod="calico-system/whisker-7bcdcfc6f8-4mnvt"
May 27 17:49:01.166569 kubelet[3133]: I0527 17:49:01.166546 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0af188af-bed9-4f26-9fd6-cb97993cd253-whisker-ca-bundle\") pod \"whisker-7bcdcfc6f8-4mnvt\" (UID: \"0af188af-bed9-4f26-9fd6-cb97993cd253\") " pod="calico-system/whisker-7bcdcfc6f8-4mnvt"
May 27 17:49:01.446142 containerd[1720]: time="2025-05-27T17:49:01.446111644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bcdcfc6f8-4mnvt,Uid:0af188af-bed9-4f26-9fd6-cb97993cd253,Namespace:calico-system,Attempt:0,}"
May 27 17:49:01.566365 systemd-networkd[1358]: cali550fa050d42: Link UP
May 27 17:49:01.567870 systemd-networkd[1358]: cali550fa050d42: Gained carrier
May 27 17:49:01.584630 containerd[1720]: 2025-05-27 17:49:01.469 [INFO][4250] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 27 17:49:01.584630 containerd[1720]: 2025-05-27 17:49:01.478 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0 whisker-7bcdcfc6f8- calico-system 0af188af-bed9-4f26-9fd6-cb97993cd253 895 0 2025-05-27 17:49:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bcdcfc6f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.0.0-a-92788821a5 whisker-7bcdcfc6f8-4mnvt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali550fa050d42 [] [] }} ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Namespace="calico-system" Pod="whisker-7bcdcfc6f8-4mnvt" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-"
May 27 17:49:01.584630 containerd[1720]: 2025-05-27 17:49:01.478 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Namespace="calico-system" Pod="whisker-7bcdcfc6f8-4mnvt" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0"
May 27 17:49:01.584630 containerd[1720]: 2025-05-27 17:49:01.517 [INFO][4280] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" HandleID="k8s-pod-network.031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Workload="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0"
May 27 17:49:01.584812 containerd[1720]: 2025-05-27 17:49:01.518 [INFO][4280] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" HandleID="k8s-pod-network.031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Workload="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-92788821a5", "pod":"whisker-7bcdcfc6f8-4mnvt", "timestamp":"2025-05-27 17:49:01.517943867 +0000 UTC"}, Hostname:"ci-4344.0.0-a-92788821a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 27 17:49:01.584812 containerd[1720]: 2025-05-27 17:49:01.518 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 27 17:49:01.584812 containerd[1720]: 2025-05-27 17:49:01.518 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 27 17:49:01.584812 containerd[1720]: 2025-05-27 17:49:01.518 [INFO][4280] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-92788821a5'
May 27 17:49:01.584812 containerd[1720]: 2025-05-27 17:49:01.526 [INFO][4280] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" host="ci-4344.0.0-a-92788821a5"
May 27 17:49:01.584812 containerd[1720]: 2025-05-27 17:49:01.532 [INFO][4280] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-92788821a5"
May 27 17:49:01.584812 containerd[1720]: 2025-05-27 17:49:01.536 [INFO][4280] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ci-4344.0.0-a-92788821a5"
May 27 17:49:01.584812 containerd[1720]: 2025-05-27 17:49:01.538 [INFO][4280] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5"
May 27 17:49:01.584812 containerd[1720]: 2025-05-27 17:49:01.540 [INFO][4280] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5"
May 27 17:49:01.585789 containerd[1720]: 2025-05-27 17:49:01.540 [INFO][4280] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" host="ci-4344.0.0-a-92788821a5"
May 27 17:49:01.585789 containerd[1720]: 2025-05-27 17:49:01.542 [INFO][4280] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63
May 27 17:49:01.585789 containerd[1720]: 2025-05-27 17:49:01.546 [INFO][4280] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" host="ci-4344.0.0-a-92788821a5"
May 27 17:49:01.585789 containerd[1720]: 2025-05-27 17:49:01.554 [INFO][4280] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.1/26] block=192.168.78.0/26 handle="k8s-pod-network.031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" host="ci-4344.0.0-a-92788821a5"
May 27 17:49:01.585789 containerd[1720]: 2025-05-27 17:49:01.554 [INFO][4280] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.1/26] handle="k8s-pod-network.031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" host="ci-4344.0.0-a-92788821a5"
May 27 17:49:01.585789 containerd[1720]: 2025-05-27 17:49:01.554 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 27 17:49:01.585789 containerd[1720]: 2025-05-27 17:49:01.554 [INFO][4280] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.1/26] IPv6=[] ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" HandleID="k8s-pod-network.031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Workload="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0" May 27 17:49:01.585950 containerd[1720]: 2025-05-27 17:49:01.558 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Namespace="calico-system" Pod="whisker-7bcdcfc6f8-4mnvt" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0", GenerateName:"whisker-7bcdcfc6f8-", Namespace:"calico-system", SelfLink:"", UID:"0af188af-bed9-4f26-9fd6-cb97993cd253", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bcdcfc6f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"", Pod:"whisker-7bcdcfc6f8-4mnvt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.78.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali550fa050d42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:01.585950 containerd[1720]: 2025-05-27 17:49:01.558 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.1/32] ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Namespace="calico-system" Pod="whisker-7bcdcfc6f8-4mnvt" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0" May 27 17:49:01.586039 containerd[1720]: 2025-05-27 17:49:01.558 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali550fa050d42 ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Namespace="calico-system" Pod="whisker-7bcdcfc6f8-4mnvt" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0" May 27 17:49:01.586039 containerd[1720]: 2025-05-27 17:49:01.569 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Namespace="calico-system" Pod="whisker-7bcdcfc6f8-4mnvt" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0" May 27 17:49:01.586080 containerd[1720]: 2025-05-27 17:49:01.569 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Namespace="calico-system" Pod="whisker-7bcdcfc6f8-4mnvt" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0", GenerateName:"whisker-7bcdcfc6f8-", Namespace:"calico-system", SelfLink:"", UID:"0af188af-bed9-4f26-9fd6-cb97993cd253", 
ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 49, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bcdcfc6f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63", Pod:"whisker-7bcdcfc6f8-4mnvt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.78.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali550fa050d42", MAC:"92:9b:43:b1:38:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:01.586131 containerd[1720]: 2025-05-27 17:49:01.581 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" Namespace="calico-system" Pod="whisker-7bcdcfc6f8-4mnvt" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-whisker--7bcdcfc6f8--4mnvt-eth0" May 27 17:49:01.629283 containerd[1720]: time="2025-05-27T17:49:01.628855157Z" level=info msg="connecting to shim 031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63" address="unix:///run/containerd/s/3f77102fbabfc4018d3e4bb1a32060592279d18d5272040d1ba350f9e039baee" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:01.658899 systemd[1]: Started cri-containerd-031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63.scope - libcontainer container 
031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63. May 27 17:49:01.728591 containerd[1720]: time="2025-05-27T17:49:01.728452291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bcdcfc6f8-4mnvt,Uid:0af188af-bed9-4f26-9fd6-cb97993cd253,Namespace:calico-system,Attempt:0,} returns sandbox id \"031e93793720b20253459bd1154ed7007417a832e2ff51607ceb7498cd003d63\"" May 27 17:49:01.731473 containerd[1720]: time="2025-05-27T17:49:01.731160478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:49:01.905484 containerd[1720]: time="2025-05-27T17:49:01.905455373Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:49:01.907604 containerd[1720]: time="2025-05-27T17:49:01.907577186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:49:01.907646 containerd[1720]: time="2025-05-27T17:49:01.907638370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:49:01.907788 kubelet[3133]: E0527 17:49:01.907748 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:49:01.908048 kubelet[3133]: E0527 17:49:01.907793 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:49:01.908075 kubelet[3133]: E0527 17:49:01.907912 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fe456007beb94267844a80ddd099b1f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljrzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bcdcfc6f8-4mnvt_calico-system(0af188af-bed9-4f26-9fd6-cb97993cd253): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:49:01.910116 containerd[1720]: time="2025-05-27T17:49:01.910084496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:49:02.082066 containerd[1720]: time="2025-05-27T17:49:02.081829361Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:49:02.083951 containerd[1720]: time="2025-05-27T17:49:02.083915906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:49:02.084074 containerd[1720]: time="2025-05-27T17:49:02.083920880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 
27 17:49:02.084124 kubelet[3133]: E0527 17:49:02.084101 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:49:02.084177 kubelet[3133]: E0527 17:49:02.084134 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:49:02.084312 kubelet[3133]: E0527 17:49:02.084257 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljrzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bcdcfc6f8-4mnvt_calico-system(0af188af-bed9-4f26-9fd6-cb97993cd253): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:49:02.085476 kubelet[3133]: E0527 17:49:02.085446 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253" May 27 17:49:02.932001 kubelet[3133]: I0527 17:49:02.931971 3133 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7bd5e7-15e4-4a39-ab15-f327a8700564" path="/var/lib/kubelet/pods/8f7bd5e7-15e4-4a39-ab15-f327a8700564/volumes" May 27 17:49:03.056820 kubelet[3133]: E0527 17:49:03.056779 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253" May 27 17:49:03.607665 systemd-networkd[1358]: cali550fa050d42: Gained IPv6LL May 27 17:49:05.930300 containerd[1720]: time="2025-05-27T17:49:05.929954317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8l7g,Uid:b3a87a97-33eb-4b72-b6d1-a7277f7e95df,Namespace:calico-system,Attempt:0,}" May 27 17:49:05.930300 containerd[1720]: time="2025-05-27T17:49:05.930275334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f44587b47-pwrq8,Uid:3e108e34-a1be-4cee-ab9c-bbcc8960c586,Namespace:calico-system,Attempt:0,}" May 27 17:49:05.930851 containerd[1720]: time="2025-05-27T17:49:05.929954303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g2n5z,Uid:f60ba011-2c39-4ab8-98b6-55b49dcacdc9,Namespace:kube-system,Attempt:0,}" May 27 17:49:06.117164 systemd-networkd[1358]: cali1764e449954: Link UP May 27 17:49:06.119017 systemd-networkd[1358]: cali1764e449954: Gained carrier May 27 17:49:06.133332 containerd[1720]: 2025-05-27 17:49:05.984 
[INFO][4489] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:49:06.133332 containerd[1720]: 2025-05-27 17:49:06.005 [INFO][4489] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0 csi-node-driver- calico-system b3a87a97-33eb-4b72-b6d1-a7277f7e95df 704 0 2025-05-27 17:48:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.0.0-a-92788821a5 csi-node-driver-k8l7g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1764e449954 [] [] }} ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Namespace="calico-system" Pod="csi-node-driver-k8l7g" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-" May 27 17:49:06.133332 containerd[1720]: 2025-05-27 17:49:06.005 [INFO][4489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Namespace="calico-system" Pod="csi-node-driver-k8l7g" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" May 27 17:49:06.133332 containerd[1720]: 2025-05-27 17:49:06.067 [INFO][4538] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" HandleID="k8s-pod-network.87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Workload="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" May 27 17:49:06.133526 containerd[1720]: 2025-05-27 17:49:06.070 [INFO][4538] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" HandleID="k8s-pod-network.87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Workload="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e32e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-92788821a5", "pod":"csi-node-driver-k8l7g", "timestamp":"2025-05-27 17:49:06.066443873 +0000 UTC"}, Hostname:"ci-4344.0.0-a-92788821a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:49:06.133526 containerd[1720]: 2025-05-27 17:49:06.070 [INFO][4538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:49:06.133526 containerd[1720]: 2025-05-27 17:49:06.070 [INFO][4538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:49:06.133526 containerd[1720]: 2025-05-27 17:49:06.070 [INFO][4538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-92788821a5' May 27 17:49:06.133526 containerd[1720]: 2025-05-27 17:49:06.080 [INFO][4538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.133526 containerd[1720]: 2025-05-27 17:49:06.086 [INFO][4538] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.133526 containerd[1720]: 2025-05-27 17:49:06.093 [INFO][4538] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.133526 containerd[1720]: 2025-05-27 17:49:06.096 [INFO][4538] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.133526 containerd[1720]: 2025-05-27 17:49:06.098 [INFO][4538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.133773 containerd[1720]: 2025-05-27 17:49:06.098 [INFO][4538] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.133773 containerd[1720]: 2025-05-27 17:49:06.100 [INFO][4538] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a May 27 17:49:06.133773 containerd[1720]: 2025-05-27 17:49:06.106 [INFO][4538] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.133773 containerd[1720]: 2025-05-27 17:49:06.110 [INFO][4538] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.78.2/26] block=192.168.78.0/26 handle="k8s-pod-network.87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.133773 containerd[1720]: 2025-05-27 17:49:06.110 [INFO][4538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.2/26] handle="k8s-pod-network.87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.133773 containerd[1720]: 2025-05-27 17:49:06.110 [INFO][4538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:49:06.133773 containerd[1720]: 2025-05-27 17:49:06.110 [INFO][4538] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.2/26] IPv6=[] ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" HandleID="k8s-pod-network.87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Workload="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" May 27 17:49:06.133942 containerd[1720]: 2025-05-27 17:49:06.113 [INFO][4489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Namespace="calico-system" Pod="csi-node-driver-k8l7g" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3a87a97-33eb-4b72-b6d1-a7277f7e95df", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"", Pod:"csi-node-driver-k8l7g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1764e449954", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:06.134022 containerd[1720]: 2025-05-27 17:49:06.113 [INFO][4489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.2/32] ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Namespace="calico-system" Pod="csi-node-driver-k8l7g" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" May 27 17:49:06.134022 containerd[1720]: 2025-05-27 17:49:06.113 [INFO][4489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1764e449954 ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Namespace="calico-system" Pod="csi-node-driver-k8l7g" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" May 27 17:49:06.134022 containerd[1720]: 2025-05-27 17:49:06.120 [INFO][4489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Namespace="calico-system" Pod="csi-node-driver-k8l7g" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" May 27 17:49:06.134113 containerd[1720]: 2025-05-27 17:49:06.121 
[INFO][4489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Namespace="calico-system" Pod="csi-node-driver-k8l7g" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3a87a97-33eb-4b72-b6d1-a7277f7e95df", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a", Pod:"csi-node-driver-k8l7g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1764e449954", MAC:"8a:2e:15:72:d1:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:06.134183 containerd[1720]: 2025-05-27 17:49:06.131 [INFO][4489] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" Namespace="calico-system" Pod="csi-node-driver-k8l7g" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-csi--node--driver--k8l7g-eth0" May 27 17:49:06.171963 containerd[1720]: time="2025-05-27T17:49:06.171899320Z" level=info msg="connecting to shim 87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a" address="unix:///run/containerd/s/dae59ba2a310a2c499ef3d449eb8d1a4b3cad3b3ccb8ea84daf3543afe969fd5" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:06.195773 systemd[1]: Started cri-containerd-87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a.scope - libcontainer container 87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a. May 27 17:49:06.218986 systemd-networkd[1358]: calie87141114b8: Link UP May 27 17:49:06.219149 systemd-networkd[1358]: calie87141114b8: Gained carrier May 27 17:49:06.221177 containerd[1720]: time="2025-05-27T17:49:06.220780951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8l7g,Uid:b3a87a97-33eb-4b72-b6d1-a7277f7e95df,Namespace:calico-system,Attempt:0,} returns sandbox id \"87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a\"" May 27 17:49:06.223469 containerd[1720]: time="2025-05-27T17:49:06.223057349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 17:49:06.233144 containerd[1720]: 2025-05-27 17:49:05.992 [INFO][4499] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:49:06.233144 containerd[1720]: 2025-05-27 17:49:06.014 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0 calico-kube-controllers-f44587b47- calico-system 3e108e34-a1be-4cee-ab9c-bbcc8960c586 825 0 2025-05-27 17:48:43 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f44587b47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.0.0-a-92788821a5 calico-kube-controllers-f44587b47-pwrq8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie87141114b8 [] [] }} ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Namespace="calico-system" Pod="calico-kube-controllers-f44587b47-pwrq8" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-" May 27 17:49:06.233144 containerd[1720]: 2025-05-27 17:49:06.015 [INFO][4499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Namespace="calico-system" Pod="calico-kube-controllers-f44587b47-pwrq8" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" May 27 17:49:06.233144 containerd[1720]: 2025-05-27 17:49:06.081 [INFO][4544] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" HandleID="k8s-pod-network.c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Workload="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" May 27 17:49:06.233342 containerd[1720]: 2025-05-27 17:49:06.081 [INFO][4544] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" HandleID="k8s-pod-network.c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Workload="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d96c0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-92788821a5", "pod":"calico-kube-controllers-f44587b47-pwrq8", "timestamp":"2025-05-27 17:49:06.074518228 +0000 UTC"}, Hostname:"ci-4344.0.0-a-92788821a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:49:06.233342 containerd[1720]: 2025-05-27 17:49:06.081 [INFO][4544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:49:06.233342 containerd[1720]: 2025-05-27 17:49:06.111 [INFO][4544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:49:06.233342 containerd[1720]: 2025-05-27 17:49:06.111 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-92788821a5' May 27 17:49:06.233342 containerd[1720]: 2025-05-27 17:49:06.180 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.233342 containerd[1720]: 2025-05-27 17:49:06.186 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.233342 containerd[1720]: 2025-05-27 17:49:06.192 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.233342 containerd[1720]: 2025-05-27 17:49:06.193 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.233342 containerd[1720]: 2025-05-27 17:49:06.195 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.233663 containerd[1720]: 2025-05-27 17:49:06.195 [INFO][4544] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 
handle="k8s-pod-network.c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.233663 containerd[1720]: 2025-05-27 17:49:06.196 [INFO][4544] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278 May 27 17:49:06.233663 containerd[1720]: 2025-05-27 17:49:06.202 [INFO][4544] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.233663 containerd[1720]: 2025-05-27 17:49:06.210 [INFO][4544] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.3/26] block=192.168.78.0/26 handle="k8s-pod-network.c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.233663 containerd[1720]: 2025-05-27 17:49:06.210 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.3/26] handle="k8s-pod-network.c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.233663 containerd[1720]: 2025-05-27 17:49:06.210 [INFO][4544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:49:06.233663 containerd[1720]: 2025-05-27 17:49:06.210 [INFO][4544] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.3/26] IPv6=[] ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" HandleID="k8s-pod-network.c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Workload="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" May 27 17:49:06.233843 containerd[1720]: 2025-05-27 17:49:06.216 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Namespace="calico-system" Pod="calico-kube-controllers-f44587b47-pwrq8" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0", GenerateName:"calico-kube-controllers-f44587b47-", Namespace:"calico-system", SelfLink:"", UID:"3e108e34-a1be-4cee-ab9c-bbcc8960c586", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f44587b47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"", Pod:"calico-kube-controllers-f44587b47-pwrq8", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie87141114b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:06.233917 containerd[1720]: 2025-05-27 17:49:06.216 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.3/32] ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Namespace="calico-system" Pod="calico-kube-controllers-f44587b47-pwrq8" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" May 27 17:49:06.233917 containerd[1720]: 2025-05-27 17:49:06.216 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie87141114b8 ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Namespace="calico-system" Pod="calico-kube-controllers-f44587b47-pwrq8" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" May 27 17:49:06.233917 containerd[1720]: 2025-05-27 17:49:06.218 [INFO][4499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Namespace="calico-system" Pod="calico-kube-controllers-f44587b47-pwrq8" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" May 27 17:49:06.233990 containerd[1720]: 2025-05-27 17:49:06.219 [INFO][4499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Namespace="calico-system" Pod="calico-kube-controllers-f44587b47-pwrq8" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0", GenerateName:"calico-kube-controllers-f44587b47-", Namespace:"calico-system", SelfLink:"", UID:"3e108e34-a1be-4cee-ab9c-bbcc8960c586", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f44587b47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278", Pod:"calico-kube-controllers-f44587b47-pwrq8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie87141114b8", MAC:"a6:71:16:5f:ae:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:06.234047 containerd[1720]: 2025-05-27 17:49:06.232 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" Namespace="calico-system" Pod="calico-kube-controllers-f44587b47-pwrq8" 
WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--kube--controllers--f44587b47--pwrq8-eth0" May 27 17:49:06.259195 containerd[1720]: time="2025-05-27T17:49:06.258810235Z" level=info msg="connecting to shim c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278" address="unix:///run/containerd/s/97237c3ae273c9fc5eefffdb22d9f8a4027c5955cfa3d573b2e99783c59dc0dc" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:06.274640 systemd[1]: Started cri-containerd-c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278.scope - libcontainer container c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278. May 27 17:49:06.318155 systemd-networkd[1358]: cali739b4ce6449: Link UP May 27 17:49:06.318995 systemd-networkd[1358]: cali739b4ce6449: Gained carrier May 27 17:49:06.321435 containerd[1720]: time="2025-05-27T17:49:06.321357026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f44587b47-pwrq8,Uid:3e108e34-a1be-4cee-ab9c-bbcc8960c586,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278\"" May 27 17:49:06.330269 containerd[1720]: 2025-05-27 17:49:06.003 [INFO][4510] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:49:06.330269 containerd[1720]: 2025-05-27 17:49:06.017 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0 coredns-674b8bbfcf- kube-system f60ba011-2c39-4ab8-98b6-55b49dcacdc9 815 0 2025-05-27 17:48:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.0.0-a-92788821a5 coredns-674b8bbfcf-g2n5z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali739b4ce6449 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 
9153 0 }] [] }} ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-g2n5z" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-" May 27 17:49:06.330269 containerd[1720]: 2025-05-27 17:49:06.017 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-g2n5z" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" May 27 17:49:06.330269 containerd[1720]: 2025-05-27 17:49:06.083 [INFO][4546] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" HandleID="k8s-pod-network.79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Workload="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" May 27 17:49:06.330513 containerd[1720]: 2025-05-27 17:49:06.083 [INFO][4546] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" HandleID="k8s-pod-network.79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Workload="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.0.0-a-92788821a5", "pod":"coredns-674b8bbfcf-g2n5z", "timestamp":"2025-05-27 17:49:06.082844418 +0000 UTC"}, Hostname:"ci-4344.0.0-a-92788821a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:49:06.330513 containerd[1720]: 2025-05-27 17:49:06.083 [INFO][4546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 17:49:06.330513 containerd[1720]: 2025-05-27 17:49:06.211 [INFO][4546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:49:06.330513 containerd[1720]: 2025-05-27 17:49:06.211 [INFO][4546] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-92788821a5' May 27 17:49:06.330513 containerd[1720]: 2025-05-27 17:49:06.280 [INFO][4546] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.330513 containerd[1720]: 2025-05-27 17:49:06.287 [INFO][4546] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.330513 containerd[1720]: 2025-05-27 17:49:06.296 [INFO][4546] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.330513 containerd[1720]: 2025-05-27 17:49:06.298 [INFO][4546] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.330513 containerd[1720]: 2025-05-27 17:49:06.300 [INFO][4546] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.330943 containerd[1720]: 2025-05-27 17:49:06.300 [INFO][4546] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.330943 containerd[1720]: 2025-05-27 17:49:06.301 [INFO][4546] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b May 27 17:49:06.330943 containerd[1720]: 2025-05-27 17:49:06.306 [INFO][4546] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" 
host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.330943 containerd[1720]: 2025-05-27 17:49:06.314 [INFO][4546] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.4/26] block=192.168.78.0/26 handle="k8s-pod-network.79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.330943 containerd[1720]: 2025-05-27 17:49:06.314 [INFO][4546] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.4/26] handle="k8s-pod-network.79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" host="ci-4344.0.0-a-92788821a5" May 27 17:49:06.330943 containerd[1720]: 2025-05-27 17:49:06.314 [INFO][4546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:49:06.330943 containerd[1720]: 2025-05-27 17:49:06.314 [INFO][4546] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.4/26] IPv6=[] ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" HandleID="k8s-pod-network.79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Workload="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" May 27 17:49:06.331091 containerd[1720]: 2025-05-27 17:49:06.316 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-g2n5z" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f60ba011-2c39-4ab8-98b6-55b49dcacdc9", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"", Pod:"coredns-674b8bbfcf-g2n5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali739b4ce6449", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:06.331091 containerd[1720]: 2025-05-27 17:49:06.316 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.4/32] ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-g2n5z" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" May 27 17:49:06.331091 containerd[1720]: 2025-05-27 17:49:06.316 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali739b4ce6449 ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-g2n5z" 
WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" May 27 17:49:06.331091 containerd[1720]: 2025-05-27 17:49:06.318 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-g2n5z" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" May 27 17:49:06.331091 containerd[1720]: 2025-05-27 17:49:06.318 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-g2n5z" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f60ba011-2c39-4ab8-98b6-55b49dcacdc9", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b", Pod:"coredns-674b8bbfcf-g2n5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.4/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali739b4ce6449", MAC:"22:1e:7b:0d:2c:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:06.331091 containerd[1720]: 2025-05-27 17:49:06.328 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" Namespace="kube-system" Pod="coredns-674b8bbfcf-g2n5z" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--g2n5z-eth0" May 27 17:49:06.359496 containerd[1720]: time="2025-05-27T17:49:06.359429221Z" level=info msg="connecting to shim 79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b" address="unix:///run/containerd/s/5bc5e672d88e2861f968ef77e71e2bfb9c86c80d6e4c9fe6c211ebbbade8bec0" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:06.375648 systemd[1]: Started cri-containerd-79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b.scope - libcontainer container 79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b. 
May 27 17:49:06.407581 containerd[1720]: time="2025-05-27T17:49:06.407564174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g2n5z,Uid:f60ba011-2c39-4ab8-98b6-55b49dcacdc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b\"" May 27 17:49:06.413286 containerd[1720]: time="2025-05-27T17:49:06.413266422Z" level=info msg="CreateContainer within sandbox \"79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:49:06.433579 containerd[1720]: time="2025-05-27T17:49:06.433455659Z" level=info msg="Container fefe5e4a0b1326bb065f0a38e44cd2c9d75656ffe3a41ec3c3c9bedb0a56c702: CDI devices from CRI Config.CDIDevices: []" May 27 17:49:06.444796 containerd[1720]: time="2025-05-27T17:49:06.444775558Z" level=info msg="CreateContainer within sandbox \"79da88a02e3703422305a629b1e3b523e3cc88ac5dbc0da10ddd04ab9c70fb6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fefe5e4a0b1326bb065f0a38e44cd2c9d75656ffe3a41ec3c3c9bedb0a56c702\"" May 27 17:49:06.445073 containerd[1720]: time="2025-05-27T17:49:06.445058798Z" level=info msg="StartContainer for \"fefe5e4a0b1326bb065f0a38e44cd2c9d75656ffe3a41ec3c3c9bedb0a56c702\"" May 27 17:49:06.445865 containerd[1720]: time="2025-05-27T17:49:06.445810601Z" level=info msg="connecting to shim fefe5e4a0b1326bb065f0a38e44cd2c9d75656ffe3a41ec3c3c9bedb0a56c702" address="unix:///run/containerd/s/5bc5e672d88e2861f968ef77e71e2bfb9c86c80d6e4c9fe6c211ebbbade8bec0" protocol=ttrpc version=3 May 27 17:49:06.460646 systemd[1]: Started cri-containerd-fefe5e4a0b1326bb065f0a38e44cd2c9d75656ffe3a41ec3c3c9bedb0a56c702.scope - libcontainer container fefe5e4a0b1326bb065f0a38e44cd2c9d75656ffe3a41ec3c3c9bedb0a56c702. 
May 27 17:49:06.482313 containerd[1720]: time="2025-05-27T17:49:06.482292613Z" level=info msg="StartContainer for \"fefe5e4a0b1326bb065f0a38e44cd2c9d75656ffe3a41ec3c3c9bedb0a56c702\" returns successfully" May 27 17:49:06.930562 containerd[1720]: time="2025-05-27T17:49:06.930353716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4kcv,Uid:009c619a-fd70-49b4-823f-4da0d1b1b32b,Namespace:kube-system,Attempt:0,}" May 27 17:49:06.930874 containerd[1720]: time="2025-05-27T17:49:06.930587702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5645b5c46d-mbgfx,Uid:d4781aaa-4b71-483f-ac78-f18b8dcfe307,Namespace:calico-apiserver,Attempt:0,}" May 27 17:49:07.037663 systemd-networkd[1358]: calie4e11dbaa16: Link UP May 27 17:49:07.037840 systemd-networkd[1358]: calie4e11dbaa16: Gained carrier May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:06.967 [INFO][4760] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:06.979 [INFO][4760] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0 coredns-674b8bbfcf- kube-system 009c619a-fd70-49b4-823f-4da0d1b1b32b 816 0 2025-05-27 17:48:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.0.0-a-92788821a5 coredns-674b8bbfcf-m4kcv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4e11dbaa16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4kcv" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 
17:49:06.979 [INFO][4760] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4kcv" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.005 [INFO][4783] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" HandleID="k8s-pod-network.ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Workload="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.005 [INFO][4783] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" HandleID="k8s-pod-network.ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Workload="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233180), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.0.0-a-92788821a5", "pod":"coredns-674b8bbfcf-m4kcv", "timestamp":"2025-05-27 17:49:07.00545688 +0000 UTC"}, Hostname:"ci-4344.0.0-a-92788821a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.005 [INFO][4783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.005 [INFO][4783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.005 [INFO][4783] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-92788821a5' May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.011 [INFO][4783] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.014 [INFO][4783] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.017 [INFO][4783] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.018 [INFO][4783] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.019 [INFO][4783] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.019 [INFO][4783] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.020 [INFO][4783] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912 May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.023 [INFO][4783] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.030 [INFO][4783] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.78.5/26] block=192.168.78.0/26 handle="k8s-pod-network.ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.030 [INFO][4783] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.5/26] handle="k8s-pod-network.ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.030 [INFO][4783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:49:07.049147 containerd[1720]: 2025-05-27 17:49:07.030 [INFO][4783] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.5/26] IPv6=[] ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" HandleID="k8s-pod-network.ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Workload="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" May 27 17:49:07.049750 containerd[1720]: 2025-05-27 17:49:07.031 [INFO][4760] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4kcv" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"009c619a-fd70-49b4-823f-4da0d1b1b32b", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"", Pod:"coredns-674b8bbfcf-m4kcv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4e11dbaa16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:07.049750 containerd[1720]: 2025-05-27 17:49:07.031 [INFO][4760] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.5/32] ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4kcv" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" May 27 17:49:07.049750 containerd[1720]: 2025-05-27 17:49:07.031 [INFO][4760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4e11dbaa16 ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4kcv" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" May 27 17:49:07.049750 containerd[1720]: 2025-05-27 17:49:07.037 [INFO][4760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4kcv" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" May 27 17:49:07.049750 containerd[1720]: 2025-05-27 17:49:07.037 [INFO][4760] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4kcv" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"009c619a-fd70-49b4-823f-4da0d1b1b32b", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912", Pod:"coredns-674b8bbfcf-m4kcv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4e11dbaa16", MAC:"2e:ff:7c:7f:8c:16", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:07.049750 containerd[1720]: 2025-05-27 17:49:07.047 [INFO][4760] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4kcv" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-coredns--674b8bbfcf--m4kcv-eth0" May 27 17:49:07.073235 kubelet[3133]: I0527 17:49:07.073184 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g2n5z" podStartSLOduration=38.073168987 podStartE2EDuration="38.073168987s" podCreationTimestamp="2025-05-27 17:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:49:07.073055223 +0000 UTC m=+44.226535536" watchObservedRunningTime="2025-05-27 17:49:07.073168987 +0000 UTC m=+44.226649298" May 27 17:49:07.096789 containerd[1720]: time="2025-05-27T17:49:07.096711170Z" level=info msg="connecting to shim ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912" address="unix:///run/containerd/s/56b206f4b99e1ce9bf0ca46181dea5f593b6cf92cbda60aa9666235cdab73d8c" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:07.127695 systemd[1]: Started cri-containerd-ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912.scope - libcontainer container ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912. 
May 27 17:49:07.182193 systemd-networkd[1358]: cali7b234e6140b: Link UP May 27 17:49:07.183962 systemd-networkd[1358]: cali7b234e6140b: Gained carrier May 27 17:49:07.207046 containerd[1720]: time="2025-05-27T17:49:07.206978291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4kcv,Uid:009c619a-fd70-49b4-823f-4da0d1b1b32b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912\"" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:06.976 [INFO][4768] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:06.983 [INFO][4768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0 calico-apiserver-5645b5c46d- calico-apiserver d4781aaa-4b71-483f-ac78-f18b8dcfe307 822 0 2025-05-27 17:48:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5645b5c46d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.0.0-a-92788821a5 calico-apiserver-5645b5c46d-mbgfx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b234e6140b [] [] }} ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-mbgfx" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:06.983 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-mbgfx" 
WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.007 [INFO][4788] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" HandleID="k8s-pod-network.feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Workload="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.008 [INFO][4788] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" HandleID="k8s-pod-network.feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Workload="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.0.0-a-92788821a5", "pod":"calico-apiserver-5645b5c46d-mbgfx", "timestamp":"2025-05-27 17:49:07.007852889 +0000 UTC"}, Hostname:"ci-4344.0.0-a-92788821a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.008 [INFO][4788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.030 [INFO][4788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.030 [INFO][4788] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-92788821a5' May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.118 [INFO][4788] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.129 [INFO][4788] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.138 [INFO][4788] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.140 [INFO][4788] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.151 [INFO][4788] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.151 [INFO][4788] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.152 [INFO][4788] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667 May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.166 [INFO][4788] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.176 [INFO][4788] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.78.6/26] block=192.168.78.0/26 handle="k8s-pod-network.feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.176 [INFO][4788] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.6/26] handle="k8s-pod-network.feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" host="ci-4344.0.0-a-92788821a5" May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.177 [INFO][4788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:49:07.207176 containerd[1720]: 2025-05-27 17:49:07.177 [INFO][4788] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.6/26] IPv6=[] ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" HandleID="k8s-pod-network.feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Workload="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" May 27 17:49:07.207881 containerd[1720]: 2025-05-27 17:49:07.179 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-mbgfx" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0", GenerateName:"calico-apiserver-5645b5c46d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4781aaa-4b71-483f-ac78-f18b8dcfe307", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5645b5c46d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"", Pod:"calico-apiserver-5645b5c46d-mbgfx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b234e6140b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:07.207881 containerd[1720]: 2025-05-27 17:49:07.180 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.6/32] ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-mbgfx" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" May 27 17:49:07.207881 containerd[1720]: 2025-05-27 17:49:07.180 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b234e6140b ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-mbgfx" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" May 27 17:49:07.207881 containerd[1720]: 2025-05-27 17:49:07.184 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-mbgfx" 
WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" May 27 17:49:07.207881 containerd[1720]: 2025-05-27 17:49:07.185 [INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-mbgfx" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0", GenerateName:"calico-apiserver-5645b5c46d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4781aaa-4b71-483f-ac78-f18b8dcfe307", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5645b5c46d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667", Pod:"calico-apiserver-5645b5c46d-mbgfx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b234e6140b", MAC:"3a:3c:33:19:42:5c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:07.207881 containerd[1720]: 2025-05-27 17:49:07.204 [INFO][4768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-mbgfx" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--mbgfx-eth0" May 27 17:49:07.215640 containerd[1720]: time="2025-05-27T17:49:07.215599402Z" level=info msg="CreateContainer within sandbox \"ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:49:07.236788 containerd[1720]: time="2025-05-27T17:49:07.236762110Z" level=info msg="Container 5c38be03ab01991a02a77d463278dd3e9e242dcdceb8aaa80512c44a0a1abfd7: CDI devices from CRI Config.CDIDevices: []" May 27 17:49:07.254887 containerd[1720]: time="2025-05-27T17:49:07.254863743Z" level=info msg="connecting to shim feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667" address="unix:///run/containerd/s/680d24c86d8f339ace3887bfcc3b9ed3b49a5d40397550124d58c42c827a74d3" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:07.263132 containerd[1720]: time="2025-05-27T17:49:07.263104079Z" level=info msg="CreateContainer within sandbox \"ea25d6a449907bb1ae8075c668c336d472f0c2ded1d5a4b7a0cb0c0be7149912\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c38be03ab01991a02a77d463278dd3e9e242dcdceb8aaa80512c44a0a1abfd7\"" May 27 17:49:07.264160 containerd[1720]: time="2025-05-27T17:49:07.264143210Z" level=info msg="StartContainer for \"5c38be03ab01991a02a77d463278dd3e9e242dcdceb8aaa80512c44a0a1abfd7\"" May 27 17:49:07.265223 containerd[1720]: time="2025-05-27T17:49:07.265200425Z" level=info msg="connecting to shim 5c38be03ab01991a02a77d463278dd3e9e242dcdceb8aaa80512c44a0a1abfd7" 
address="unix:///run/containerd/s/56b206f4b99e1ce9bf0ca46181dea5f593b6cf92cbda60aa9666235cdab73d8c" protocol=ttrpc version=3 May 27 17:49:07.280668 systemd[1]: Started cri-containerd-feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667.scope - libcontainer container feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667. May 27 17:49:07.284266 systemd[1]: Started cri-containerd-5c38be03ab01991a02a77d463278dd3e9e242dcdceb8aaa80512c44a0a1abfd7.scope - libcontainer container 5c38be03ab01991a02a77d463278dd3e9e242dcdceb8aaa80512c44a0a1abfd7. May 27 17:49:07.310083 containerd[1720]: time="2025-05-27T17:49:07.310059303Z" level=info msg="StartContainer for \"5c38be03ab01991a02a77d463278dd3e9e242dcdceb8aaa80512c44a0a1abfd7\" returns successfully" May 27 17:49:07.335017 containerd[1720]: time="2025-05-27T17:49:07.334985231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5645b5c46d-mbgfx,Uid:d4781aaa-4b71-483f-ac78-f18b8dcfe307,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667\"" May 27 17:49:07.383693 systemd-networkd[1358]: calie87141114b8: Gained IPv6LL May 27 17:49:07.383888 systemd-networkd[1358]: cali739b4ce6449: Gained IPv6LL May 27 17:49:07.648149 containerd[1720]: time="2025-05-27T17:49:07.648090085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:07.650503 containerd[1720]: time="2025-05-27T17:49:07.650473471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 17:49:07.652983 containerd[1720]: time="2025-05-27T17:49:07.652949977Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:07.655716 containerd[1720]: 
time="2025-05-27T17:49:07.655661471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:07.656004 containerd[1720]: time="2025-05-27T17:49:07.655943431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.432851131s" May 27 17:49:07.656004 containerd[1720]: time="2025-05-27T17:49:07.655968193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 17:49:07.657005 containerd[1720]: time="2025-05-27T17:49:07.656814508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 17:49:07.662317 containerd[1720]: time="2025-05-27T17:49:07.662296870Z" level=info msg="CreateContainer within sandbox \"87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 17:49:07.677873 containerd[1720]: time="2025-05-27T17:49:07.677854255Z" level=info msg="Container a775ac26421e4b6969a91b6a5f8b15a3ee044565140cde74bf3c4159fdb9569a: CDI devices from CRI Config.CDIDevices: []" May 27 17:49:07.704977 containerd[1720]: time="2025-05-27T17:49:07.704955338Z" level=info msg="CreateContainer within sandbox \"87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a775ac26421e4b6969a91b6a5f8b15a3ee044565140cde74bf3c4159fdb9569a\"" May 27 17:49:07.705479 containerd[1720]: time="2025-05-27T17:49:07.705388159Z" level=info msg="StartContainer 
for \"a775ac26421e4b6969a91b6a5f8b15a3ee044565140cde74bf3c4159fdb9569a\"" May 27 17:49:07.706737 containerd[1720]: time="2025-05-27T17:49:07.706714670Z" level=info msg="connecting to shim a775ac26421e4b6969a91b6a5f8b15a3ee044565140cde74bf3c4159fdb9569a" address="unix:///run/containerd/s/dae59ba2a310a2c499ef3d449eb8d1a4b3cad3b3ccb8ea84daf3543afe969fd5" protocol=ttrpc version=3 May 27 17:49:07.719668 systemd[1]: Started cri-containerd-a775ac26421e4b6969a91b6a5f8b15a3ee044565140cde74bf3c4159fdb9569a.scope - libcontainer container a775ac26421e4b6969a91b6a5f8b15a3ee044565140cde74bf3c4159fdb9569a. May 27 17:49:07.745782 containerd[1720]: time="2025-05-27T17:49:07.745761596Z" level=info msg="StartContainer for \"a775ac26421e4b6969a91b6a5f8b15a3ee044565140cde74bf3c4159fdb9569a\" returns successfully" May 27 17:49:07.941276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224827462.mount: Deactivated successfully. May 27 17:49:07.959620 systemd-networkd[1358]: cali1764e449954: Gained IPv6LL May 27 17:49:08.078906 kubelet[3133]: I0527 17:49:08.078273 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m4kcv" podStartSLOduration=39.078256811 podStartE2EDuration="39.078256811s" podCreationTimestamp="2025-05-27 17:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:49:08.078105826 +0000 UTC m=+45.231586135" watchObservedRunningTime="2025-05-27 17:49:08.078256811 +0000 UTC m=+45.231737121" May 27 17:49:08.471662 systemd-networkd[1358]: calie4e11dbaa16: Gained IPv6LL May 27 17:49:08.930556 containerd[1720]: time="2025-05-27T17:49:08.930077401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5645b5c46d-trkx5,Uid:fdd8d26e-edda-4b13-b4da-f6328fc5d832,Namespace:calico-apiserver,Attempt:0,}" May 27 17:49:08.931073 containerd[1720]: time="2025-05-27T17:49:08.931052505Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-rdk7t,Uid:809f7cb6-fbee-4102-b430-229c080e87f0,Namespace:calico-system,Attempt:0,}" May 27 17:49:09.086713 systemd-networkd[1358]: cali805595ff054: Link UP May 27 17:49:09.089133 systemd-networkd[1358]: cali805595ff054: Gained carrier May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:08.990 [INFO][5008] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.010 [INFO][5008] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0 calico-apiserver-5645b5c46d- calico-apiserver fdd8d26e-edda-4b13-b4da-f6328fc5d832 821 0 2025-05-27 17:48:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5645b5c46d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.0.0-a-92788821a5 calico-apiserver-5645b5c46d-trkx5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali805595ff054 [] [] }} ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-trkx5" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.010 [INFO][5008] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-trkx5" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.046 [INFO][5039] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" HandleID="k8s-pod-network.0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Workload="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.046 [INFO][5039] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" HandleID="k8s-pod-network.0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Workload="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.0.0-a-92788821a5", "pod":"calico-apiserver-5645b5c46d-trkx5", "timestamp":"2025-05-27 17:49:09.046000404 +0000 UTC"}, Hostname:"ci-4344.0.0-a-92788821a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.046 [INFO][5039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.046 [INFO][5039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.046 [INFO][5039] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-92788821a5' May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.055 [INFO][5039] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.059 [INFO][5039] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.064 [INFO][5039] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.065 [INFO][5039] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.066 [INFO][5039] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.067 [INFO][5039] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.067 [INFO][5039] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7 May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.072 [INFO][5039] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.081 [INFO][5039] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.78.7/26] block=192.168.78.0/26 handle="k8s-pod-network.0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.081 [INFO][5039] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.7/26] handle="k8s-pod-network.0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.081 [INFO][5039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:49:09.102557 containerd[1720]: 2025-05-27 17:49:09.081 [INFO][5039] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.7/26] IPv6=[] ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" HandleID="k8s-pod-network.0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Workload="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" May 27 17:49:09.104126 containerd[1720]: 2025-05-27 17:49:09.083 [INFO][5008] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-trkx5" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0", GenerateName:"calico-apiserver-5645b5c46d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdd8d26e-edda-4b13-b4da-f6328fc5d832", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5645b5c46d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"", Pod:"calico-apiserver-5645b5c46d-trkx5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali805595ff054", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:09.104126 containerd[1720]: 2025-05-27 17:49:09.083 [INFO][5008] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.7/32] ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-trkx5" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" May 27 17:49:09.104126 containerd[1720]: 2025-05-27 17:49:09.083 [INFO][5008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali805595ff054 ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-trkx5" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" May 27 17:49:09.104126 containerd[1720]: 2025-05-27 17:49:09.089 [INFO][5008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-trkx5" 
WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" May 27 17:49:09.104126 containerd[1720]: 2025-05-27 17:49:09.090 [INFO][5008] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-trkx5" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0", GenerateName:"calico-apiserver-5645b5c46d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdd8d26e-edda-4b13-b4da-f6328fc5d832", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5645b5c46d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7", Pod:"calico-apiserver-5645b5c46d-trkx5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali805595ff054", MAC:"9e:4d:01:2c:6d:89", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:09.104126 containerd[1720]: 2025-05-27 17:49:09.100 [INFO][5008] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" Namespace="calico-apiserver" Pod="calico-apiserver-5645b5c46d-trkx5" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-calico--apiserver--5645b5c46d--trkx5-eth0" May 27 17:49:09.111612 systemd-networkd[1358]: cali7b234e6140b: Gained IPv6LL May 27 17:49:09.182560 systemd-networkd[1358]: cali67a35c18ac3: Link UP May 27 17:49:09.183037 systemd-networkd[1358]: cali67a35c18ac3: Gained carrier May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:08.991 [INFO][5009] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.006 [INFO][5009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0 goldmane-78d55f7ddc- calico-system 809f7cb6-fbee-4102-b430-229c080e87f0 823 0 2025-05-27 17:48:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.0.0-a-92788821a5 goldmane-78d55f7ddc-rdk7t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali67a35c18ac3 [] [] }} ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rdk7t" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.006 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rdk7t" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.049 [INFO][5041] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" HandleID="k8s-pod-network.45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Workload="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.049 [INFO][5041] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" HandleID="k8s-pod-network.45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Workload="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-92788821a5", "pod":"goldmane-78d55f7ddc-rdk7t", "timestamp":"2025-05-27 17:49:09.049732175 +0000 UTC"}, Hostname:"ci-4344.0.0-a-92788821a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.049 [INFO][5041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.081 [INFO][5041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.082 [INFO][5041] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-92788821a5' May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.155 [INFO][5041] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.158 [INFO][5041] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.163 [INFO][5041] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.164 [INFO][5041] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.166 [INFO][5041] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.166 [INFO][5041] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.167 [INFO][5041] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494 May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.172 [INFO][5041] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.179 [INFO][5041] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.78.8/26] block=192.168.78.0/26 handle="k8s-pod-network.45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.179 [INFO][5041] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.8/26] handle="k8s-pod-network.45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" host="ci-4344.0.0-a-92788821a5" May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.179 [INFO][5041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:49:09.193826 containerd[1720]: 2025-05-27 17:49:09.179 [INFO][5041] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.8/26] IPv6=[] ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" HandleID="k8s-pod-network.45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Workload="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" May 27 17:49:09.194417 containerd[1720]: 2025-05-27 17:49:09.180 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rdk7t" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"809f7cb6-fbee-4102-b430-229c080e87f0", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"", Pod:"goldmane-78d55f7ddc-rdk7t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.78.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67a35c18ac3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:09.194417 containerd[1720]: 2025-05-27 17:49:09.180 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.8/32] ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rdk7t" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" May 27 17:49:09.194417 containerd[1720]: 2025-05-27 17:49:09.180 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67a35c18ac3 ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rdk7t" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" May 27 17:49:09.194417 containerd[1720]: 2025-05-27 17:49:09.181 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rdk7t" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" May 27 17:49:09.194417 containerd[1720]: 2025-05-27 17:49:09.181 [INFO][5009] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rdk7t" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"809f7cb6-fbee-4102-b430-229c080e87f0", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-92788821a5", ContainerID:"45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494", Pod:"goldmane-78d55f7ddc-rdk7t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.78.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67a35c18ac3", MAC:"86:9a:89:d8:98:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:49:09.194417 containerd[1720]: 2025-05-27 17:49:09.192 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rdk7t" WorkloadEndpoint="ci--4344.0.0--a--92788821a5-k8s-goldmane--78d55f7ddc--rdk7t-eth0" May 27 17:49:09.444840 containerd[1720]: time="2025-05-27T17:49:09.444746220Z" level=info msg="connecting to shim 45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494" address="unix:///run/containerd/s/8acbd4defb1880eb0162bbb7662a79ed1395d5441a30e2bca57ad4eab48da130" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:09.446990 containerd[1720]: time="2025-05-27T17:49:09.446954383Z" level=info msg="connecting to shim 0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7" address="unix:///run/containerd/s/85fb0d51baf589c654d3abae1f1044550ee39799169f0ec8fa7d20e43e501cca" namespace=k8s.io protocol=ttrpc version=3 May 27 17:49:09.479823 systemd[1]: Started cri-containerd-0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7.scope - libcontainer container 0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7. May 27 17:49:09.482432 systemd[1]: Started cri-containerd-45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494.scope - libcontainer container 45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494. 
May 27 17:49:09.554466 containerd[1720]: time="2025-05-27T17:49:09.554397713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5645b5c46d-trkx5,Uid:fdd8d26e-edda-4b13-b4da-f6328fc5d832,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7\"" May 27 17:49:09.567774 containerd[1720]: time="2025-05-27T17:49:09.567751724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-rdk7t,Uid:809f7cb6-fbee-4102-b430-229c080e87f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"45714a3ed0e17b019e1011bf236a043be89b6c950a5b4e16228ae0278495c494\"" May 27 17:49:09.973512 containerd[1720]: time="2025-05-27T17:49:09.973487176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:09.975717 containerd[1720]: time="2025-05-27T17:49:09.975686922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 17:49:09.977879 containerd[1720]: time="2025-05-27T17:49:09.977844676Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:09.980583 containerd[1720]: time="2025-05-27T17:49:09.980550507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:09.980877 containerd[1720]: time="2025-05-27T17:49:09.980804313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.323965375s" May 27 17:49:09.980877 containerd[1720]: time="2025-05-27T17:49:09.980827774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 17:49:09.981874 containerd[1720]: time="2025-05-27T17:49:09.981719697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:49:09.996685 containerd[1720]: time="2025-05-27T17:49:09.996663475Z" level=info msg="CreateContainer within sandbox \"c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 17:49:10.011592 containerd[1720]: time="2025-05-27T17:49:10.009934937Z" level=info msg="Container 0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed: CDI devices from CRI Config.CDIDevices: []" May 27 17:49:10.023912 containerd[1720]: time="2025-05-27T17:49:10.023889056Z" level=info msg="CreateContainer within sandbox \"c4786d7e6bcb08377ef28be6ed875aaf6727992c5c80b8576999d8745456e278\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\"" May 27 17:49:10.024270 containerd[1720]: time="2025-05-27T17:49:10.024192378Z" level=info msg="StartContainer for \"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\"" May 27 17:49:10.025145 containerd[1720]: time="2025-05-27T17:49:10.025107751Z" level=info msg="connecting to shim 0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed" address="unix:///run/containerd/s/97237c3ae273c9fc5eefffdb22d9f8a4027c5955cfa3d573b2e99783c59dc0dc" protocol=ttrpc version=3 May 27 17:49:10.042655 systemd[1]: Started 
cri-containerd-0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed.scope - libcontainer container 0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed. May 27 17:49:10.089282 containerd[1720]: time="2025-05-27T17:49:10.089235860Z" level=info msg="StartContainer for \"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\" returns successfully" May 27 17:49:10.391772 systemd-networkd[1358]: cali67a35c18ac3: Gained IPv6LL May 27 17:49:10.612401 kubelet[3133]: I0527 17:49:10.612207 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:49:10.967715 systemd-networkd[1358]: cali805595ff054: Gained IPv6LL May 27 17:49:11.130955 containerd[1720]: time="2025-05-27T17:49:11.130926812Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\" id:\"77864d9239a118e8edaf3f528c6982b8b554d3763f71572a7620db051a8afb7b\" pid:5279 exited_at:{seconds:1748368151 nanos:130688101}" May 27 17:49:11.143714 kubelet[3133]: I0527 17:49:11.143597 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f44587b47-pwrq8" podStartSLOduration=24.48595966 podStartE2EDuration="28.143481005s" podCreationTimestamp="2025-05-27 17:48:43 +0000 UTC" firstStartedPulling="2025-05-27 17:49:06.323780064 +0000 UTC m=+43.477260371" lastFinishedPulling="2025-05-27 17:49:09.981301412 +0000 UTC m=+47.134781716" observedRunningTime="2025-05-27 17:49:11.09547841 +0000 UTC m=+48.248958719" watchObservedRunningTime="2025-05-27 17:49:11.143481005 +0000 UTC m=+48.296961313" May 27 17:49:11.314378 systemd-networkd[1358]: vxlan.calico: Link UP May 27 17:49:11.314385 systemd-networkd[1358]: vxlan.calico: Gained carrier May 27 17:49:12.569106 systemd-networkd[1358]: vxlan.calico: Gained IPv6LL May 27 17:49:12.724144 containerd[1720]: time="2025-05-27T17:49:12.724111596Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:12.726309 containerd[1720]: time="2025-05-27T17:49:12.726276770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 17:49:12.729915 containerd[1720]: time="2025-05-27T17:49:12.729861816Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:12.733600 containerd[1720]: time="2025-05-27T17:49:12.733519537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:12.733963 containerd[1720]: time="2025-05-27T17:49:12.733857303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 2.752114683s" May 27 17:49:12.733963 containerd[1720]: time="2025-05-27T17:49:12.733885431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 17:49:12.734638 containerd[1720]: time="2025-05-27T17:49:12.734620289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 17:49:12.739844 containerd[1720]: time="2025-05-27T17:49:12.739821845Z" level=info msg="CreateContainer within sandbox \"feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 
17:49:12.760179 containerd[1720]: time="2025-05-27T17:49:12.760089591Z" level=info msg="Container c6abf212f001cd3d1cf98ff0c288ed3bc8486e43167d91e20a3a9f69b5212f10: CDI devices from CRI Config.CDIDevices: []" May 27 17:49:12.774501 containerd[1720]: time="2025-05-27T17:49:12.774478457Z" level=info msg="CreateContainer within sandbox \"feaa494dace5adc4fcdb533156c6e64bcc1f711bf9d61a9307c5cccd9abf0667\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c6abf212f001cd3d1cf98ff0c288ed3bc8486e43167d91e20a3a9f69b5212f10\"" May 27 17:49:12.774939 containerd[1720]: time="2025-05-27T17:49:12.774921020Z" level=info msg="StartContainer for \"c6abf212f001cd3d1cf98ff0c288ed3bc8486e43167d91e20a3a9f69b5212f10\"" May 27 17:49:12.775947 containerd[1720]: time="2025-05-27T17:49:12.775821698Z" level=info msg="connecting to shim c6abf212f001cd3d1cf98ff0c288ed3bc8486e43167d91e20a3a9f69b5212f10" address="unix:///run/containerd/s/680d24c86d8f339ace3887bfcc3b9ed3b49a5d40397550124d58c42c827a74d3" protocol=ttrpc version=3 May 27 17:49:12.793703 systemd[1]: Started cri-containerd-c6abf212f001cd3d1cf98ff0c288ed3bc8486e43167d91e20a3a9f69b5212f10.scope - libcontainer container c6abf212f001cd3d1cf98ff0c288ed3bc8486e43167d91e20a3a9f69b5212f10. 
May 27 17:49:12.834421 containerd[1720]: time="2025-05-27T17:49:12.834315110Z" level=info msg="StartContainer for \"c6abf212f001cd3d1cf98ff0c288ed3bc8486e43167d91e20a3a9f69b5212f10\" returns successfully" May 27 17:49:13.105009 kubelet[3133]: I0527 17:49:13.104505 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5645b5c46d-mbgfx" podStartSLOduration=27.706344574 podStartE2EDuration="33.104489346s" podCreationTimestamp="2025-05-27 17:48:40 +0000 UTC" firstStartedPulling="2025-05-27 17:49:07.3363828 +0000 UTC m=+44.489863110" lastFinishedPulling="2025-05-27 17:49:12.734527569 +0000 UTC m=+49.888007882" observedRunningTime="2025-05-27 17:49:13.104097549 +0000 UTC m=+50.257577859" watchObservedRunningTime="2025-05-27 17:49:13.104489346 +0000 UTC m=+50.257969656" May 27 17:49:14.092733 kubelet[3133]: I0527 17:49:14.092704 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:49:16.358001 kubelet[3133]: I0527 17:49:16.357930 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:49:16.763198 containerd[1720]: time="2025-05-27T17:49:16.763159388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:16.765106 containerd[1720]: time="2025-05-27T17:49:16.765072665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 17:49:16.771440 containerd[1720]: time="2025-05-27T17:49:16.771400431Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:16.775221 containerd[1720]: time="2025-05-27T17:49:16.775181131Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:16.775754 containerd[1720]: time="2025-05-27T17:49:16.775497092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 4.040850212s" May 27 17:49:16.775754 containerd[1720]: time="2025-05-27T17:49:16.775523670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 17:49:16.776637 containerd[1720]: time="2025-05-27T17:49:16.776259212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:49:16.781513 containerd[1720]: time="2025-05-27T17:49:16.781490756Z" level=info msg="CreateContainer within sandbox \"87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 17:49:16.801651 containerd[1720]: time="2025-05-27T17:49:16.800629273Z" level=info msg="Container 32af290c31b63e083f40b69092beca11b57c6e8764276f6b2d9c36f74f60b7d7: CDI devices from CRI Config.CDIDevices: []" May 27 17:49:16.814808 containerd[1720]: time="2025-05-27T17:49:16.814786854Z" level=info msg="CreateContainer within sandbox \"87c15048b478a644622f66f162f98e2958dfb1e40bf6566c02d5bafb4f4a4f4a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"32af290c31b63e083f40b69092beca11b57c6e8764276f6b2d9c36f74f60b7d7\"" May 27 17:49:16.815394 containerd[1720]: 
time="2025-05-27T17:49:16.815201908Z" level=info msg="StartContainer for \"32af290c31b63e083f40b69092beca11b57c6e8764276f6b2d9c36f74f60b7d7\"" May 27 17:49:16.816553 containerd[1720]: time="2025-05-27T17:49:16.816504829Z" level=info msg="connecting to shim 32af290c31b63e083f40b69092beca11b57c6e8764276f6b2d9c36f74f60b7d7" address="unix:///run/containerd/s/dae59ba2a310a2c499ef3d449eb8d1a4b3cad3b3ccb8ea84daf3543afe969fd5" protocol=ttrpc version=3 May 27 17:49:16.835676 systemd[1]: Started cri-containerd-32af290c31b63e083f40b69092beca11b57c6e8764276f6b2d9c36f74f60b7d7.scope - libcontainer container 32af290c31b63e083f40b69092beca11b57c6e8764276f6b2d9c36f74f60b7d7. May 27 17:49:16.863684 containerd[1720]: time="2025-05-27T17:49:16.863631770Z" level=info msg="StartContainer for \"32af290c31b63e083f40b69092beca11b57c6e8764276f6b2d9c36f74f60b7d7\" returns successfully" May 27 17:49:17.008971 kubelet[3133]: I0527 17:49:17.008949 3133 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 17:49:17.009060 kubelet[3133]: I0527 17:49:17.008984 3133 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 17:49:17.088668 containerd[1720]: time="2025-05-27T17:49:17.088593297Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:49:17.090656 containerd[1720]: time="2025-05-27T17:49:17.090619891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 17:49:17.091886 containerd[1720]: time="2025-05-27T17:49:17.091865646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 315.574196ms" May 27 17:49:17.091943 containerd[1720]: time="2025-05-27T17:49:17.091888188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 17:49:17.092939 containerd[1720]: time="2025-05-27T17:49:17.092829871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:49:17.099706 containerd[1720]: time="2025-05-27T17:49:17.099687948Z" level=info msg="CreateContainer within sandbox \"0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:49:17.111988 kubelet[3133]: I0527 17:49:17.111936 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k8l7g" podStartSLOduration=23.558671809 podStartE2EDuration="34.111919352s" podCreationTimestamp="2025-05-27 17:48:43 +0000 UTC" firstStartedPulling="2025-05-27 17:49:06.222882542 +0000 UTC m=+43.376362852" lastFinishedPulling="2025-05-27 17:49:16.776130088 +0000 UTC m=+53.929610395" observedRunningTime="2025-05-27 17:49:17.11131791 +0000 UTC m=+54.264798219" watchObservedRunningTime="2025-05-27 17:49:17.111919352 +0000 UTC m=+54.265399668" May 27 17:49:17.117592 containerd[1720]: time="2025-05-27T17:49:17.116827237Z" level=info msg="Container 09ffe1981a5fd9a022966bfa87376e50a58aeaa4636f740df67377e04f5684f4: CDI devices from CRI Config.CDIDevices: []" May 27 17:49:17.133235 containerd[1720]: time="2025-05-27T17:49:17.133214361Z" level=info msg="CreateContainer within sandbox \"0e6a83a6c85724f83d28c2afe9b8cfdfb3b332976d732a95f4a82e954d931ee7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"09ffe1981a5fd9a022966bfa87376e50a58aeaa4636f740df67377e04f5684f4\"" May 27 17:49:17.133678 containerd[1720]: time="2025-05-27T17:49:17.133641344Z" level=info msg="StartContainer for \"09ffe1981a5fd9a022966bfa87376e50a58aeaa4636f740df67377e04f5684f4\"" May 27 17:49:17.134714 containerd[1720]: time="2025-05-27T17:49:17.134662634Z" level=info msg="connecting to shim 09ffe1981a5fd9a022966bfa87376e50a58aeaa4636f740df67377e04f5684f4" address="unix:///run/containerd/s/85fb0d51baf589c654d3abae1f1044550ee39799169f0ec8fa7d20e43e501cca" protocol=ttrpc version=3 May 27 17:49:17.149662 systemd[1]: Started cri-containerd-09ffe1981a5fd9a022966bfa87376e50a58aeaa4636f740df67377e04f5684f4.scope - libcontainer container 09ffe1981a5fd9a022966bfa87376e50a58aeaa4636f740df67377e04f5684f4. May 27 17:49:17.187374 containerd[1720]: time="2025-05-27T17:49:17.187349945Z" level=info msg="StartContainer for \"09ffe1981a5fd9a022966bfa87376e50a58aeaa4636f740df67377e04f5684f4\" returns successfully" May 27 17:49:17.281188 containerd[1720]: time="2025-05-27T17:49:17.281144733Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:49:17.283283 containerd[1720]: time="2025-05-27T17:49:17.283257126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:49:17.283368 containerd[1720]: time="2025-05-27T17:49:17.283329819Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:49:17.283476 kubelet[3133]: E0527 17:49:17.283449 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:49:17.283540 kubelet[3133]: E0527 17:49:17.283490 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:49:17.283844 kubelet[3133]: E0527 17:49:17.283765 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9hx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-rdk7t_calico-system(809f7cb6-fbee-4102-b430-229c080e87f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:49:17.284578 containerd[1720]: time="2025-05-27T17:49:17.284088198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:49:17.285850 kubelet[3133]: E0527 17:49:17.285737 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 
Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0" May 27 17:49:17.453734 containerd[1720]: time="2025-05-27T17:49:17.453693691Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:49:17.456060 containerd[1720]: time="2025-05-27T17:49:17.456015454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:49:17.456060 containerd[1720]: time="2025-05-27T17:49:17.456040513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:49:17.456244 kubelet[3133]: E0527 17:49:17.456202 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:49:17.456507 kubelet[3133]: E0527 17:49:17.456256 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:49:17.456507 kubelet[3133]: E0527 17:49:17.456460 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fe456007beb94267844a80ddd099b1f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljrzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bcdcfc6f8-4mnvt_calico-system(0af188af-bed9-4f26-9fd6-cb97993cd253): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:49:17.458439 containerd[1720]: time="2025-05-27T17:49:17.458408362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:49:17.632710 containerd[1720]: time="2025-05-27T17:49:17.632662899Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:49:17.635117 containerd[1720]: time="2025-05-27T17:49:17.635031688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:49:17.635273 containerd[1720]: time="2025-05-27T17:49:17.635138096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:49:17.635418 kubelet[3133]: E0527 17:49:17.635317 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:49:17.635418 kubelet[3133]: E0527 17:49:17.635387 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:49:17.635632 kubelet[3133]: E0527 17:49:17.635505 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljrzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bcdcfc6f8-4mnvt_calico-system(0af188af-bed9-4f26-9fd6-cb97993cd253): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:49:17.637208 kubelet[3133]: E0527 17:49:17.637173 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253" May 27 17:49:18.104464 kubelet[3133]: E0527 17:49:18.104433 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0" May 27 17:49:18.128319 kubelet[3133]: I0527 17:49:18.127741 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5645b5c46d-trkx5" podStartSLOduration=30.591190081 podStartE2EDuration="38.127727538s" podCreationTimestamp="2025-05-27 17:48:40 +0000 UTC" firstStartedPulling="2025-05-27 17:49:09.555934645 +0000 UTC m=+46.709414959" lastFinishedPulling="2025-05-27 17:49:17.092472103 +0000 UTC m=+54.245952416" observedRunningTime="2025-05-27 17:49:18.127467082 +0000 UTC m=+55.280947398" watchObservedRunningTime="2025-05-27 17:49:18.127727538 +0000 UTC m=+55.281207848" May 27 17:49:19.105985 kubelet[3133]: I0527 17:49:19.105957 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:49:28.931332 kubelet[3133]: E0527 17:49:28.931164 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253" May 27 17:49:29.930834 containerd[1720]: time="2025-05-27T17:49:29.930669939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:49:30.096226 containerd[1720]: time="2025-05-27T17:49:30.096180353Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:49:30.098937 containerd[1720]: time="2025-05-27T17:49:30.098907519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:49:30.099042 containerd[1720]: time="2025-05-27T17:49:30.098972001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:49:30.099096 kubelet[3133]: E0527 17:49:30.099065 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:49:30.099595 kubelet[3133]: E0527 17:49:30.099105 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:49:30.099595 kubelet[3133]: E0527 17:49:30.099231 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9hx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-rdk7t_calico-system(809f7cb6-fbee-4102-b430-229c080e87f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:49:30.100416 kubelet[3133]: E0527 17:49:30.100381 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0" May 27 17:49:31.109752 containerd[1720]: 
time="2025-05-27T17:49:31.109717569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30\" id:\"9e0a797bbe182f0f0b5585f333536e144bcc0b7335b42288ab45bf6da1d43598\" pid:5567 exited_at:{seconds:1748368171 nanos:109475052}" May 27 17:49:31.169210 containerd[1720]: time="2025-05-27T17:49:31.169179508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30\" id:\"fa4c2430674a9908d43ccb8cbb7061c32ed038a9a09d7ad65dc211f76fc0b54d\" pid:5592 exited_at:{seconds:1748368171 nanos:169006459}" May 27 17:49:32.196491 kubelet[3133]: I0527 17:49:32.196207 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:49:41.115500 containerd[1720]: time="2025-05-27T17:49:41.115428889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\" id:\"05be8d47737545b0499144516deef218fa5234d488dcd43235c8af76fa69aece\" pid:5630 exited_at:{seconds:1748368181 nanos:115225184}" May 27 17:49:41.931492 containerd[1720]: time="2025-05-27T17:49:41.931433505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:49:42.099818 containerd[1720]: time="2025-05-27T17:49:42.099773775Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:49:42.102593 containerd[1720]: time="2025-05-27T17:49:42.102487359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous 
token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:49:42.102773 containerd[1720]: time="2025-05-27T17:49:42.102560096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:49:42.102896 kubelet[3133]: E0527 17:49:42.102829 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:49:42.103691 kubelet[3133]: E0527 17:49:42.102964 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:49:42.103691 kubelet[3133]: E0527 17:49:42.103090 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fe456007beb94267844a80ddd099b1f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljrzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bcdcfc6f8-4mnvt_calico-system(0af188af-bed9-4f26-9fd6-cb97993cd253): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:49:42.105260 containerd[1720]: 
time="2025-05-27T17:49:42.105151840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:49:42.281435 containerd[1720]: time="2025-05-27T17:49:42.281279829Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:49:42.285890 containerd[1720]: time="2025-05-27T17:49:42.285833759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:49:42.285890 containerd[1720]: time="2025-05-27T17:49:42.285860907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:49:42.286016 kubelet[3133]: E0527 17:49:42.285980 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:49:42.286067 kubelet[3133]: E0527 17:49:42.286016 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:49:42.286165 kubelet[3133]: E0527 17:49:42.286118 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljrzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bcdcfc6f8-4mnvt_calico-system(0af188af-bed9-4f26-9fd6-cb97993cd253): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:49:42.288000 kubelet[3133]: E0527 17:49:42.287969 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253" May 27 17:49:42.932593 kubelet[3133]: E0527 17:49:42.932560 3133 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0" May 27 17:49:54.933291 kubelet[3133]: E0527 17:49:54.932717 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253" May 27 17:49:56.931289 containerd[1720]: 
time="2025-05-27T17:49:56.931240519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:49:57.492808 containerd[1720]: time="2025-05-27T17:49:57.492747412Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:49:57.495672 containerd[1720]: time="2025-05-27T17:49:57.495584022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:49:57.495944 containerd[1720]: time="2025-05-27T17:49:57.495832158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:49:57.496089 kubelet[3133]: E0527 17:49:57.496034 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:49:57.496518 kubelet[3133]: E0527 17:49:57.496233 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed 
to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:49:57.497028 kubelet[3133]: E0527 17:49:57.496969 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9hx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-rdk7t_calico-system(809f7cb6-fbee-4102-b430-229c080e87f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:49:57.498292 kubelet[3133]: E0527 17:49:57.498203 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0" May 27 17:50:01.166276 containerd[1720]: time="2025-05-27T17:50:01.166235495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30\" id:\"f471ad9799c85d3159b79303558747a59aa3ee2d87e4d2a214fce43d511fe048\" pid:5662 exited_at:{seconds:1748368201 nanos:165942752}" May 27 17:50:08.932934 kubelet[3133]: E0527 17:50:08.932881 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253" May 27 17:50:10.933780 kubelet[3133]: E0527 17:50:10.933617 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0" May 27 17:50:11.118995 containerd[1720]: time="2025-05-27T17:50:11.118964847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\" id:\"e88d419bebca0cf6f8bc1c0b01ee645797784e4f84ab3ab4695320050690ed0d\" pid:5686 exited_at:{seconds:1748368211 nanos:118773647}" May 27 17:50:12.063889 containerd[1720]: time="2025-05-27T17:50:12.063847609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\" id:\"ac9ca96e871b1c33939631dd8e65df90b45751dede568906ec507b6b3b6e8e43\" pid:5708 exited_at:{seconds:1748368212 nanos:63299830}" May 27 17:50:15.766273 systemd[1]: Started sshd@7-10.200.8.19:22-10.200.16.10:58442.service - OpenSSH per-connection server daemon (10.200.16.10:58442). May 27 17:50:16.390134 sshd[5721]: Accepted publickey for core from 10.200.16.10 port 58442 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:50:16.391108 sshd-session[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:50:16.394480 systemd-logind[1703]: New session 10 of user core. May 27 17:50:16.402678 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 27 17:50:16.877886 sshd[5723]: Connection closed by 10.200.16.10 port 58442 May 27 17:50:16.878315 sshd-session[5721]: pam_unix(sshd:session): session closed for user core May 27 17:50:16.880939 systemd[1]: sshd@7-10.200.8.19:22-10.200.16.10:58442.service: Deactivated successfully. May 27 17:50:16.882590 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:50:16.883373 systemd-logind[1703]: Session 10 logged out. Waiting for processes to exit. May 27 17:50:16.884468 systemd-logind[1703]: Removed session 10. May 27 17:50:19.932309 kubelet[3133]: E0527 17:50:19.932221 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253" May 27 17:50:21.987272 systemd[1]: Started sshd@8-10.200.8.19:22-10.200.16.10:39072.service - OpenSSH per-connection server daemon (10.200.16.10:39072). 
May 27 17:50:22.629429 sshd[5736]: Accepted publickey for core from 10.200.16.10 port 39072 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:50:22.630429 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:50:22.633712 systemd-logind[1703]: New session 11 of user core. May 27 17:50:22.637647 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:50:23.138709 sshd[5738]: Connection closed by 10.200.16.10 port 39072 May 27 17:50:23.138736 sshd-session[5736]: pam_unix(sshd:session): session closed for user core May 27 17:50:23.142878 systemd[1]: sshd@8-10.200.8.19:22-10.200.16.10:39072.service: Deactivated successfully. May 27 17:50:23.145660 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:50:23.147408 systemd-logind[1703]: Session 11 logged out. Waiting for processes to exit. May 27 17:50:23.150336 systemd-logind[1703]: Removed session 11. May 27 17:50:23.930617 kubelet[3133]: E0527 17:50:23.930448 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0" May 27 17:50:28.253247 systemd[1]: Started sshd@9-10.200.8.19:22-10.200.16.10:39086.service - OpenSSH per-connection server daemon (10.200.16.10:39086). 
May 27 17:50:28.878786 sshd[5753]: Accepted publickey for core from 10.200.16.10 port 39086 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:50:28.879762 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:50:28.883507 systemd-logind[1703]: New session 12 of user core. May 27 17:50:28.889680 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:50:29.362224 sshd[5755]: Connection closed by 10.200.16.10 port 39086 May 27 17:50:29.362625 sshd-session[5753]: pam_unix(sshd:session): session closed for user core May 27 17:50:29.365030 systemd[1]: sshd@9-10.200.8.19:22-10.200.16.10:39086.service: Deactivated successfully. May 27 17:50:29.366639 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:50:29.367341 systemd-logind[1703]: Session 12 logged out. Waiting for processes to exit. May 27 17:50:29.368459 systemd-logind[1703]: Removed session 12. May 27 17:50:29.478088 systemd[1]: Started sshd@10-10.200.8.19:22-10.200.16.10:51328.service - OpenSSH per-connection server daemon (10.200.16.10:51328). May 27 17:50:30.096658 sshd[5769]: Accepted publickey for core from 10.200.16.10 port 51328 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:50:30.097788 sshd-session[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:50:30.101673 systemd-logind[1703]: New session 13 of user core. May 27 17:50:30.105688 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 17:50:30.601817 sshd[5773]: Connection closed by 10.200.16.10 port 51328 May 27 17:50:30.602352 sshd-session[5769]: pam_unix(sshd:session): session closed for user core May 27 17:50:30.606011 systemd-logind[1703]: Session 13 logged out. Waiting for processes to exit. May 27 17:50:30.606618 systemd[1]: sshd@10-10.200.8.19:22-10.200.16.10:51328.service: Deactivated successfully. 
May 27 17:50:30.608273 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:50:30.609890 systemd-logind[1703]: Removed session 13. May 27 17:50:30.716089 systemd[1]: Started sshd@11-10.200.8.19:22-10.200.16.10:51334.service - OpenSSH per-connection server daemon (10.200.16.10:51334). May 27 17:50:30.932489 containerd[1720]: time="2025-05-27T17:50:30.932237646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:50:31.120289 containerd[1720]: time="2025-05-27T17:50:31.120194824Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:50:31.125519 containerd[1720]: time="2025-05-27T17:50:31.125481994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:50:31.125631 containerd[1720]: time="2025-05-27T17:50:31.125560955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:50:31.125696 kubelet[3133]: E0527 17:50:31.125662 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 
Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:50:31.125985 kubelet[3133]: E0527 17:50:31.125707 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:50:31.125985 kubelet[3133]: E0527 17:50:31.125815 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fe456007beb94267844a80ddd099b1f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljrzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bcdcfc6f8-4mnvt_calico-system(0af188af-bed9-4f26-9fd6-cb97993cd253): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:50:31.128544 containerd[1720]: time="2025-05-27T17:50:31.128340168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:50:31.173600 containerd[1720]: time="2025-05-27T17:50:31.173580698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30\" id:\"cbc13bc03e3b9d458059f593e2b1a24d2ce33084b0e8ba04c8d4d9c802f183a7\" pid:5799 exited_at:{seconds:1748368231 nanos:173272550}" May 27 17:50:31.294375 containerd[1720]: time="2025-05-27T17:50:31.294307960Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:50:31.296417 containerd[1720]: time="2025-05-27T17:50:31.296374547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:50:31.296559 containerd[1720]: time="2025-05-27T17:50:31.296379813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 17:50:31.296605 kubelet[3133]: E0527 17:50:31.296578 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:50:31.296664 kubelet[3133]: E0527 17:50:31.296614 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:50:31.296763 kubelet[3133]: E0527 17:50:31.296727 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljrzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bcdcfc6f8-4mnvt_calico-system(0af188af-bed9-4f26-9fd6-cb97993cd253): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:50:31.298013 kubelet[3133]: E0527 17:50:31.297969 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253"
May 27 17:50:31.339201 sshd[5783]: Accepted publickey for core from 10.200.16.10 port 51334 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:50:31.340090 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:31.344005 systemd-logind[1703]: New session 14 of user core.
May 27 17:50:31.351677 systemd[1]: Started session-14.scope - Session 14 of User core.
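Every 403 above comes from the same place: containerd's anonymous token fetch against ghcr.io. The failing token URL is a mechanical transform of the image reference; a minimal sketch of that mapping (the helper name is ours, not containerd's):

```python
from urllib.parse import quote

def ghcr_token_url(image_ref: str) -> str:
    """Build the anonymous pull-token URL that the errors above show
    containerd requesting for a ghcr.io image reference.
    Illustrative helper, not actual containerd code."""
    # Drop the registry host and the ":tag" suffix to get the repository path.
    repo = image_ref.removeprefix("ghcr.io/").rsplit(":", 1)[0]
    # The scope is "repository:<path>:pull", fully URL-encoded (':' -> %3A, '/' -> %2F).
    scope = quote(f"repository:{repo}:pull", safe="")
    return f"https://ghcr.io/token?scope={scope}&service=ghcr.io"

print(ghcr_token_url("ghcr.io/flatcar/calico/whisker-backend:v3.30.0"))
# → https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io
```

The output matches the URL in the log entries verbatim, which is why every image under ghcr.io/flatcar/calico fails the same way: the registry rejects the anonymous token request itself, before any layer is fetched.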
May 27 17:50:31.832076 sshd[5812]: Connection closed by 10.200.16.10 port 51334
May 27 17:50:31.832453 sshd-session[5783]: pam_unix(sshd:session): session closed for user core
May 27 17:50:31.835294 systemd[1]: sshd@11-10.200.8.19:22-10.200.16.10:51334.service: Deactivated successfully.
May 27 17:50:31.837416 systemd[1]: session-14.scope: Deactivated successfully.
May 27 17:50:31.838261 systemd-logind[1703]: Session 14 logged out. Waiting for processes to exit.
May 27 17:50:31.839257 systemd-logind[1703]: Removed session 14.
May 27 17:50:36.947604 systemd[1]: Started sshd@12-10.200.8.19:22-10.200.16.10:51346.service - OpenSSH per-connection server daemon (10.200.16.10:51346).
May 27 17:50:37.577515 sshd[5836]: Accepted publickey for core from 10.200.16.10 port 51346 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:50:37.578503 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:37.581868 systemd-logind[1703]: New session 15 of user core.
May 27 17:50:37.586663 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 17:50:38.059943 sshd[5838]: Connection closed by 10.200.16.10 port 51346
May 27 17:50:38.060302 sshd-session[5836]: pam_unix(sshd:session): session closed for user core
May 27 17:50:38.062846 systemd[1]: sshd@12-10.200.8.19:22-10.200.16.10:51346.service: Deactivated successfully.
May 27 17:50:38.064421 systemd[1]: session-15.scope: Deactivated successfully.
May 27 17:50:38.065059 systemd-logind[1703]: Session 15 logged out. Waiting for processes to exit.
May 27 17:50:38.066709 systemd-logind[1703]: Removed session 15.
May 27 17:50:38.931741 containerd[1720]: time="2025-05-27T17:50:38.931505510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:50:39.103082 containerd[1720]: time="2025-05-27T17:50:39.103037236Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:50:39.105473 containerd[1720]: time="2025-05-27T17:50:39.105412568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:50:39.105473 containerd[1720]: time="2025-05-27T17:50:39.105457092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:50:39.105605 kubelet[3133]: E0527 17:50:39.105571 3133 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:50:39.105828 kubelet[3133]: E0527 17:50:39.105609 3133 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:50:39.105828 kubelet[3133]: E0527 17:50:39.105738 3133 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9hx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-rdk7t_calico-system(809f7cb6-fbee-4102-b430-229c080e87f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:50:39.107106 kubelet[3133]: E0527 17:50:39.107079 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0"
May 27 17:50:41.114192 containerd[1720]: time="2025-05-27T17:50:41.114157163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\" id:\"4727cacb51ca1677ec3a189ab1933a5c6d270999d2d0b8af5a707ff30c3dc7eb\" pid:5861 exited_at:{seconds:1748368241 nanos:113897596}"
May 27 17:50:43.170192 systemd[1]: Started sshd@13-10.200.8.19:22-10.200.16.10:36520.service - OpenSSH per-connection server daemon (10.200.16.10:36520).
May 27 17:50:43.789448 sshd[5871]: Accepted publickey for core from 10.200.16.10 port 36520 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:50:43.790309 sshd-session[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:43.794115 systemd-logind[1703]: New session 16 of user core.
May 27 17:50:43.799699 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 17:50:44.271923 sshd[5873]: Connection closed by 10.200.16.10 port 36520
May 27 17:50:44.272279 sshd-session[5871]: pam_unix(sshd:session): session closed for user core
May 27 17:50:44.274747 systemd-logind[1703]: Session 16 logged out. Waiting for processes to exit.
May 27 17:50:44.274854 systemd[1]: sshd@13-10.200.8.19:22-10.200.16.10:36520.service: Deactivated successfully.
May 27 17:50:44.277169 systemd[1]: session-16.scope: Deactivated successfully.
May 27 17:50:44.279856 systemd-logind[1703]: Removed session 16.
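The containerd TaskExit entries interleaved with the pull failures carry their exit time as raw `exited_at:{seconds:... nanos:...}` protobuf fields. A small sketch for rendering those into the same UTC timestamps the journal uses (helper name is ours):

```python
from datetime import datetime, timezone

def exited_at_utc(seconds: int, nanos: int = 0) -> str:
    """Render a containerd TaskExit exited_at {seconds, nanos} pair
    as an RFC 3339-style UTC timestamp with nanosecond precision."""
    ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

# Values taken from the TaskExit event logged at May 27 17:50:41 above.
print(exited_at_utc(1748368241, 113897596))
# → 2025-05-27T17:50:41.113897596Z
```

The converted value lines up with the journal timestamp of the entry that carried it, confirming the epoch fields are plain UTC seconds.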
May 27 17:50:45.931171 kubelet[3133]: E0527 17:50:45.931116 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253"
May 27 17:50:49.384548 systemd[1]: Started sshd@14-10.200.8.19:22-10.200.16.10:39338.service - OpenSSH per-connection server daemon (10.200.16.10:39338).
May 27 17:50:50.013686 sshd[5908]: Accepted publickey for core from 10.200.16.10 port 39338 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:50:50.014701 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:50.018600 systemd-logind[1703]: New session 17 of user core.
May 27 17:50:50.022691 systemd[1]: Started session-17.scope - Session 17 of User core.
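From this point the errors shift from ErrImagePull to ImagePullBackOff: kubelet stops retrying immediately and re-attempts the pull on a growing delay, which is why the failure entries recur at widening intervals rather than continuously. A sketch of that schedule, assuming kubelet's usual defaults of a 10s initial delay doubling up to a 300s cap (the values are illustrative, not read from this log):

```python
def pull_backoff_delays(base: float = 10.0, cap: float = 300.0, retries: int = 6) -> list[float]:
    """Sketch of kubelet-style image pull back-off: the delay doubles on
    each consecutive failure until it saturates at the cap."""
    delay, schedule = base, []
    for _ in range(retries):
        schedule.append(min(delay, cap))
        delay *= 2
    return schedule

print(pull_backoff_delays())  # → [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```

Once the delay saturates, the "Back-off pulling image" entries settle into the roughly periodic cadence visible through the rest of this log.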
May 27 17:50:50.506522 sshd[5910]: Connection closed by 10.200.16.10 port 39338
May 27 17:50:50.506914 sshd-session[5908]: pam_unix(sshd:session): session closed for user core
May 27 17:50:50.509315 systemd[1]: sshd@14-10.200.8.19:22-10.200.16.10:39338.service: Deactivated successfully.
May 27 17:50:50.510860 systemd[1]: session-17.scope: Deactivated successfully.
May 27 17:50:50.511575 systemd-logind[1703]: Session 17 logged out. Waiting for processes to exit.
May 27 17:50:50.512529 systemd-logind[1703]: Removed session 17.
May 27 17:50:51.065743 systemd[1]: Started sshd@15-10.200.8.19:22-10.200.16.10:39350.service - OpenSSH per-connection server daemon (10.200.16.10:39350).
May 27 17:50:51.705030 sshd[5922]: Accepted publickey for core from 10.200.16.10 port 39350 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:50:51.705991 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:51.709591 systemd-logind[1703]: New session 18 of user core.
May 27 17:50:51.715700 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 17:50:52.259169 sshd[5924]: Connection closed by 10.200.16.10 port 39350
May 27 17:50:52.258276 sshd-session[5922]: pam_unix(sshd:session): session closed for user core
May 27 17:50:52.261112 systemd[1]: sshd@15-10.200.8.19:22-10.200.16.10:39350.service: Deactivated successfully.
May 27 17:50:52.262726 systemd[1]: session-18.scope: Deactivated successfully.
May 27 17:50:52.263409 systemd-logind[1703]: Session 18 logged out. Waiting for processes to exit.
May 27 17:50:52.264331 systemd-logind[1703]: Removed session 18.
May 27 17:50:53.092294 systemd[1]: Started sshd@16-10.200.8.19:22-10.200.16.10:39356.service - OpenSSH per-connection server daemon (10.200.16.10:39356).
May 27 17:50:53.717837 sshd[5934]: Accepted publickey for core from 10.200.16.10 port 39356 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:50:53.718749 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:53.722587 systemd-logind[1703]: New session 19 of user core.
May 27 17:50:53.725656 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 17:50:53.930462 kubelet[3133]: E0527 17:50:53.930433 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0"
May 27 17:50:54.731226 sshd[5936]: Connection closed by 10.200.16.10 port 39356
May 27 17:50:54.731873 sshd-session[5934]: pam_unix(sshd:session): session closed for user core
May 27 17:50:54.735422 systemd[1]: sshd@16-10.200.8.19:22-10.200.16.10:39356.service: Deactivated successfully.
May 27 17:50:54.738151 systemd[1]: session-19.scope: Deactivated successfully.
May 27 17:50:54.739834 systemd-logind[1703]: Session 19 logged out. Waiting for processes to exit.
May 27 17:50:54.741920 systemd-logind[1703]: Removed session 19.
May 27 17:50:54.846106 systemd[1]: Started sshd@17-10.200.8.19:22-10.200.16.10:39372.service - OpenSSH per-connection server daemon (10.200.16.10:39372).
May 27 17:50:55.468685 sshd[5953]: Accepted publickey for core from 10.200.16.10 port 39372 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:50:55.469594 sshd-session[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:55.472834 systemd-logind[1703]: New session 20 of user core.
May 27 17:50:55.476661 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 17:50:56.025229 sshd[5955]: Connection closed by 10.200.16.10 port 39372
May 27 17:50:56.025642 sshd-session[5953]: pam_unix(sshd:session): session closed for user core
May 27 17:50:56.028044 systemd[1]: sshd@17-10.200.8.19:22-10.200.16.10:39372.service: Deactivated successfully.
May 27 17:50:56.029447 systemd[1]: session-20.scope: Deactivated successfully.
May 27 17:50:56.030211 systemd-logind[1703]: Session 20 logged out. Waiting for processes to exit.
May 27 17:50:56.031442 systemd-logind[1703]: Removed session 20.
May 27 17:50:56.149027 systemd[1]: Started sshd@18-10.200.8.19:22-10.200.16.10:39374.service - OpenSSH per-connection server daemon (10.200.16.10:39374).
May 27 17:50:56.766661 sshd[5964]: Accepted publickey for core from 10.200.16.10 port 39374 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:50:56.767616 sshd-session[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:50:56.771504 systemd-logind[1703]: New session 21 of user core.
May 27 17:50:56.776673 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 17:50:57.249524 sshd[5966]: Connection closed by 10.200.16.10 port 39374
May 27 17:50:57.250013 sshd-session[5964]: pam_unix(sshd:session): session closed for user core
May 27 17:50:57.252460 systemd[1]: sshd@18-10.200.8.19:22-10.200.16.10:39374.service: Deactivated successfully.
May 27 17:50:57.254873 systemd[1]: session-21.scope: Deactivated successfully.
May 27 17:50:57.255629 systemd-logind[1703]: Session 21 logged out. Waiting for processes to exit.
May 27 17:50:57.256461 systemd-logind[1703]: Removed session 21.
May 27 17:50:59.931213 kubelet[3133]: E0527 17:50:59.931158 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253"
May 27 17:51:01.169097 containerd[1720]: time="2025-05-27T17:51:01.169055348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96f844e49c4cc338215c6ea696b3a4a9532c8febc188a6350ad82ad54665ca30\" id:\"a3ba5528955fe10a57931584a8e176b656db306d92218dd4efbd00bac0b57746\" pid:5993 exited_at:{seconds:1748368261 nanos:168772062}"
May 27 17:51:02.365294 systemd[1]: Started sshd@19-10.200.8.19:22-10.200.16.10:54886.service - OpenSSH per-connection server daemon (10.200.16.10:54886).
May 27 17:51:02.990469 sshd[6005]: Accepted publickey for core from 10.200.16.10 port 54886 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:51:02.991416 sshd-session[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:51:02.995330 systemd-logind[1703]: New session 22 of user core.
May 27 17:51:02.999731 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 17:51:03.481065 sshd[6007]: Connection closed by 10.200.16.10 port 54886
May 27 17:51:03.481692 sshd-session[6005]: pam_unix(sshd:session): session closed for user core
May 27 17:51:03.484076 systemd[1]: sshd@19-10.200.8.19:22-10.200.16.10:54886.service: Deactivated successfully.
May 27 17:51:03.485615 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:51:03.486211 systemd-logind[1703]: Session 22 logged out. Waiting for processes to exit.
May 27 17:51:03.487048 systemd-logind[1703]: Removed session 22.
May 27 17:51:04.931950 kubelet[3133]: E0527 17:51:04.931884 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0"
May 27 17:51:08.597266 systemd[1]: Started sshd@20-10.200.8.19:22-10.200.16.10:40232.service - OpenSSH per-connection server daemon (10.200.16.10:40232).
May 27 17:51:09.231100 sshd[6019]: Accepted publickey for core from 10.200.16.10 port 40232 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:51:09.232022 sshd-session[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:51:09.235601 systemd-logind[1703]: New session 23 of user core.
May 27 17:51:09.243653 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 17:51:09.712702 sshd[6021]: Connection closed by 10.200.16.10 port 40232
May 27 17:51:09.713071 sshd-session[6019]: pam_unix(sshd:session): session closed for user core
May 27 17:51:09.715457 systemd[1]: sshd@20-10.200.8.19:22-10.200.16.10:40232.service: Deactivated successfully.
May 27 17:51:09.717039 systemd[1]: session-23.scope: Deactivated successfully.
May 27 17:51:09.717770 systemd-logind[1703]: Session 23 logged out. Waiting for processes to exit.
May 27 17:51:09.718889 systemd-logind[1703]: Removed session 23.
May 27 17:51:11.113864 containerd[1720]: time="2025-05-27T17:51:11.113809369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\" id:\"097ca311329a4bf98d93f86b6f640c4dff5503557cfc324fad491f62a9e9340f\" pid:6044 exited_at:{seconds:1748368271 nanos:113588359}"
May 27 17:51:12.052991 containerd[1720]: time="2025-05-27T17:51:12.052941197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2f831c2f4f09369ee79be50e1f2095620cefe0506a9b4e1e9526c82bc449ed\" id:\"7b014d3302adfc72e807a9574891b7208813962326ceaac96093187a50f43c58\" pid:6065 exited_at:{seconds:1748368272 nanos:52725860}"
May 27 17:51:12.931707 kubelet[3133]: E0527 17:51:12.931519 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253"
May 27 17:51:14.823750 systemd[1]: Started sshd@21-10.200.8.19:22-10.200.16.10:40238.service - OpenSSH per-connection server daemon (10.200.16.10:40238).
May 27 17:51:15.443976 sshd[6074]: Accepted publickey for core from 10.200.16.10 port 40238 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:51:15.444896 sshd-session[6074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:51:15.448586 systemd-logind[1703]: New session 24 of user core.
May 27 17:51:15.451711 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 17:51:15.928463 sshd[6076]: Connection closed by 10.200.16.10 port 40238
May 27 17:51:15.928876 sshd-session[6074]: pam_unix(sshd:session): session closed for user core
May 27 17:51:15.930899 systemd[1]: sshd@21-10.200.8.19:22-10.200.16.10:40238.service: Deactivated successfully.
May 27 17:51:15.932340 systemd[1]: session-24.scope: Deactivated successfully.
May 27 17:51:15.933416 systemd-logind[1703]: Session 24 logged out. Waiting for processes to exit.
May 27 17:51:15.934460 systemd-logind[1703]: Removed session 24.
May 27 17:51:16.931359 kubelet[3133]: E0527 17:51:16.930373 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rdk7t" podUID="809f7cb6-fbee-4102-b430-229c080e87f0"
May 27 17:51:21.039241 systemd[1]: Started sshd@22-10.200.8.19:22-10.200.16.10:52770.service - OpenSSH per-connection server daemon (10.200.16.10:52770).
May 27 17:51:21.659709 sshd[6089]: Accepted publickey for core from 10.200.16.10 port 52770 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:51:21.660681 sshd-session[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:51:21.664607 systemd-logind[1703]: New session 25 of user core.
May 27 17:51:21.668692 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 17:51:22.141183 sshd[6091]: Connection closed by 10.200.16.10 port 52770
May 27 17:51:22.141740 sshd-session[6089]: pam_unix(sshd:session): session closed for user core
May 27 17:51:22.144111 systemd[1]: sshd@22-10.200.8.19:22-10.200.16.10:52770.service: Deactivated successfully.
May 27 17:51:22.145627 systemd[1]: session-25.scope: Deactivated successfully.
May 27 17:51:22.146234 systemd-logind[1703]: Session 25 logged out. Waiting for processes to exit.
May 27 17:51:22.147335 systemd-logind[1703]: Removed session 25.
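With the same pull failures repeating for three images, it helps to reduce the noise to the set of failing image references. A small sketch that scrapes image references out of a kubelet pull-failure entry (the helper and the sample string are ours; the sample quotes the back-off wording from the entries above):

```python
import re

def failing_images(entry: str) -> set[str]:
    """Collect the ghcr.io image references named in a kubelet
    pull-failure log entry (host/path:tag, tag starting with 'v')."""
    return set(re.findall(r'ghcr\.io/[\w./-]+:v[\w.]+', entry))

entry = ('Back-off pulling image "ghcr.io/flatcar/calico/whisker:v3.30.0", '
         'Back-off pulling image "ghcr.io/flatcar/calico/whisker-backend:v3.30.0"')
print(sorted(failing_images(entry)))
# → ['ghcr.io/flatcar/calico/whisker-backend:v3.30.0', 'ghcr.io/flatcar/calico/whisker:v3.30.0']
```

Run over the whole journal, this collapses hundreds of repeated error lines into the three affected images: whisker, whisker-backend, and goldmane.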
May 27 17:51:25.931334 kubelet[3133]: E0527 17:51:25.931280 3133 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7bcdcfc6f8-4mnvt" podUID="0af188af-bed9-4f26-9fd6-cb97993cd253"
May 27 17:51:25.948708 kubelet[3133]: E0527 17:51:25.948590 3133 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: EOF" event="&Event{ObjectMeta:{whisker-7bcdcfc6f8-4mnvt.184373901777ca23 calico-system 1535 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:whisker-7bcdcfc6f8-4mnvt,UID:0af188af-bed9-4f26-9fd6-cb97993cd253,APIVersion:v1,ResourceVersion:889,FieldPath:spec.containers{whisker},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/whisker:v3.30.0\",Source:EventSource{Component:kubelet,Host:ci-4344.0.0-a-92788821a5,},FirstTimestamp:2025-05-27 17:49:03 +0000 UTC,LastTimestamp:2025-05-27 17:51:25.930944086 +0000 UTC m=+183.084424401,Count:9,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-a-92788821a5,}"