May 27 03:21:39.003295 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:21:39.003332 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:21:39.003345 kernel: BIOS-provided physical RAM map:
May 27 03:21:39.003353 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 03:21:39.003360 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 27 03:21:39.003367 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 27 03:21:39.003378 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
May 27 03:21:39.003385 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd0fff] usable
May 27 03:21:39.003392 kernel: BIOS-e820: [mem 0x000000003ffd1000-0x000000003fffafff] ACPI data
May 27 03:21:39.003399 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 27 03:21:39.003407 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 27 03:21:39.003415 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 27 03:21:39.003422 kernel: printk: legacy bootconsole [earlyser0] enabled
May 27 03:21:39.003429 kernel: NX (Execute Disable) protection: active
May 27 03:21:39.003440 kernel: APIC: Static calls initialized
May 27 03:21:39.003447 kernel: efi: EFI v2.7 by Microsoft
May 27 03:21:39.003455 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ebb9a98 RNG=0x3ffd2018
May 27 03:21:39.003463 kernel: random: crng init done
May 27 03:21:39.003470 kernel: secureboot: Secure boot disabled
May 27 03:21:39.003478 kernel: SMBIOS 3.1.0 present.
May 27 03:21:39.003485 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/21/2024
May 27 03:21:39.003493 kernel: DMI: Memory slots populated: 2/2
May 27 03:21:39.003502 kernel: Hypervisor detected: Microsoft Hyper-V
May 27 03:21:39.003510 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
May 27 03:21:39.003518 kernel: Hyper-V: Nested features: 0x3e0101
May 27 03:21:39.003525 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 27 03:21:39.003532 kernel: Hyper-V: Using hypercall for remote TLB flush
May 27 03:21:39.003540 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 27 03:21:39.003548 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 27 03:21:39.003555 kernel: tsc: Detected 2300.000 MHz processor
May 27 03:21:39.003563 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:21:39.003572 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:21:39.003579 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
May 27 03:21:39.003589 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 03:21:39.003597 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:21:39.003605 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
May 27 03:21:39.003613 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
May 27 03:21:39.003621 kernel: Using GB pages for direct mapping
May 27 03:21:39.003628 kernel: ACPI: Early table checksum verification disabled
May 27 03:21:39.003636 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 27 03:21:39.003647 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 03:21:39.003657 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 03:21:39.003666 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 27 03:21:39.003674 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 27 03:21:39.003682 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 03:21:39.003690 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 03:21:39.003700 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 03:21:39.003708 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
May 27 03:21:39.003716 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
May 27 03:21:39.003724 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 03:21:39.003732 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 27 03:21:39.003740 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
May 27 03:21:39.003749 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 27 03:21:39.003756 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 27 03:21:39.003764 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 27 03:21:39.003774 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 27 03:21:39.003782 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
May 27 03:21:39.003790 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
May 27 03:21:39.003798 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 27 03:21:39.003806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 27 03:21:39.003814 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
May 27 03:21:39.003822 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
May 27 03:21:39.003831 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
May 27 03:21:39.003838 kernel: Zone ranges:
May 27 03:21:39.003848 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:21:39.003856 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 27 03:21:39.003864 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 27 03:21:39.003872 kernel: Device empty
May 27 03:21:39.003880 kernel: Movable zone start for each node
May 27 03:21:39.003888 kernel: Early memory node ranges
May 27 03:21:39.003895 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 27 03:21:39.003903 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 27 03:21:39.003911 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd0fff]
May 27 03:21:39.003988 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 27 03:21:39.003999 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 27 03:21:39.004010 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 27 03:21:39.004022 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:21:39.004032 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 27 03:21:39.004043 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
May 27 03:21:39.004052 kernel: On node 0, zone DMA32: 46 pages in unavailable ranges
May 27 03:21:39.004063 kernel: ACPI: PM-Timer IO Port: 0x408
May 27 03:21:39.004073 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:21:39.004085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:21:39.004096 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:21:39.004106 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 27 03:21:39.004116 kernel: TSC deadline timer available
May 27 03:21:39.004126 kernel: CPU topo: Max. logical packages: 1
May 27 03:21:39.004137 kernel: CPU topo: Max. logical dies: 1
May 27 03:21:39.004148 kernel: CPU topo: Max. dies per package: 1
May 27 03:21:39.004157 kernel: CPU topo: Max. threads per core: 2
May 27 03:21:39.004168 kernel: CPU topo: Num. cores per package: 1
May 27 03:21:39.004182 kernel: CPU topo: Num. threads per package: 2
May 27 03:21:39.004192 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 03:21:39.004202 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 27 03:21:39.004213 kernel: Booting paravirtualized kernel on Hyper-V
May 27 03:21:39.004223 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:21:39.004234 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 03:21:39.004245 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 03:21:39.004254 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 03:21:39.004264 kernel: pcpu-alloc: [0] 0 1
May 27 03:21:39.004276 kernel: Hyper-V: PV spinlocks enabled
May 27 03:21:39.004286 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:21:39.004297 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:21:39.004307 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:21:39.004317 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 27 03:21:39.004327 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:21:39.004336 kernel: Fallback order for Node 0: 0
May 27 03:21:39.004346 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2096877
May 27 03:21:39.004357 kernel: Policy zone: Normal
May 27 03:21:39.004366 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:21:39.004375 kernel: software IO TLB: area num 2.
May 27 03:21:39.004384 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 03:21:39.004393 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:21:39.004402 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:21:39.004412 kernel: Dynamic Preempt: voluntary
May 27 03:21:39.004422 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:21:39.004432 kernel: rcu: RCU event tracing is enabled.
May 27 03:21:39.004444 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 03:21:39.004462 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:21:39.004471 kernel: Rude variant of Tasks RCU enabled.
May 27 03:21:39.004484 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:21:39.004494 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:21:39.004503 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 03:21:39.004513 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:21:39.004521 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:21:39.004532 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:21:39.004542 kernel: Using NULL legacy PIC
May 27 03:21:39.004552 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 27 03:21:39.004565 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:21:39.004574 kernel: Console: colour dummy device 80x25
May 27 03:21:39.004584 kernel: printk: legacy console [tty1] enabled
May 27 03:21:39.004594 kernel: printk: legacy console [ttyS0] enabled
May 27 03:21:39.004604 kernel: printk: legacy bootconsole [earlyser0] disabled
May 27 03:21:39.004613 kernel: ACPI: Core revision 20240827
May 27 03:21:39.004627 kernel: Failed to register legacy timer interrupt
May 27 03:21:39.004636 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:21:39.004646 kernel: x2apic enabled
May 27 03:21:39.004656 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:21:39.004666 kernel: Hyper-V: Host Build 10.0.26100.1221-1-0
May 27 03:21:39.004675 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 27 03:21:39.004686 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
May 27 03:21:39.004696 kernel: Hyper-V: Using IPI hypercalls
May 27 03:21:39.004704 kernel: APIC: send_IPI() replaced with hv_send_ipi()
May 27 03:21:39.004717 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
May 27 03:21:39.004727 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
May 27 03:21:39.004737 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
May 27 03:21:39.004747 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
May 27 03:21:39.004756 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
May 27 03:21:39.004766 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
May 27 03:21:39.004775 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
May 27 03:21:39.004785 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:21:39.004795 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 27 03:21:39.004812 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 27 03:21:39.004819 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:21:39.004827 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:21:39.004834 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:21:39.004842 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 27 03:21:39.004850 kernel: RETBleed: Vulnerable
May 27 03:21:39.004858 kernel: Speculative Store Bypass: Vulnerable
May 27 03:21:39.004865 kernel: ITS: Mitigation: Aligned branch/return thunks
May 27 03:21:39.004873 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:21:39.004880 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:21:39.004887 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:21:39.004897 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 27 03:21:39.004904 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 27 03:21:39.004911 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 27 03:21:39.004929 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
May 27 03:21:39.004937 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
May 27 03:21:39.004944 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
May 27 03:21:39.004951 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:21:39.004959 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 27 03:21:39.004965 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 27 03:21:39.004972 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 27 03:21:39.004981 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
May 27 03:21:39.004989 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
May 27 03:21:39.004996 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
May 27 03:21:39.005004 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
May 27 03:21:39.005011 kernel: Freeing SMP alternatives memory: 32K
May 27 03:21:39.005019 kernel: pid_max: default: 32768 minimum: 301
May 27 03:21:39.005026 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:21:39.005033 kernel: landlock: Up and running.
May 27 03:21:39.005040 kernel: SELinux: Initializing.
May 27 03:21:39.005047 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 03:21:39.005054 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 03:21:39.005062 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
May 27 03:21:39.005070 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
May 27 03:21:39.005078 kernel: signal: max sigframe size: 11952
May 27 03:21:39.005086 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:21:39.005093 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:21:39.005099 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:21:39.005106 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 27 03:21:39.005112 kernel: smp: Bringing up secondary CPUs ...
May 27 03:21:39.005118 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:21:39.005125 kernel: .... node #0, CPUs: #1
May 27 03:21:39.005133 kernel: smp: Brought up 1 node, 2 CPUs
May 27 03:21:39.005140 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
May 27 03:21:39.005146 kernel: Memory: 8082312K/8387508K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 299988K reserved, 0K cma-reserved)
May 27 03:21:39.005153 kernel: devtmpfs: initialized
May 27 03:21:39.005159 kernel: x86/mm: Memory block size: 128MB
May 27 03:21:39.005166 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 27 03:21:39.005173 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:21:39.005180 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 03:21:39.005187 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:21:39.005205 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:21:39.005213 kernel: audit: initializing netlink subsys (disabled)
May 27 03:21:39.005226 kernel: audit: type=2000 audit(1748316095.030:1): state=initialized audit_enabled=0 res=1
May 27 03:21:39.005234 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:21:39.005242 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:21:39.005250 kernel: cpuidle: using governor menu
May 27 03:21:39.005258 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:21:39.005266 kernel: dca service started, version 1.12.1
May 27 03:21:39.005274 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
May 27 03:21:39.005285 kernel: e820: reserve RAM buffer [mem 0x3ffd1000-0x3fffffff]
May 27 03:21:39.005293 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:21:39.005304 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:21:39.005313 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:21:39.005320 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:21:39.005328 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:21:39.005336 kernel: ACPI: Added _OSI(Module Device)
May 27 03:21:39.005344 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:21:39.005352 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:21:39.005362 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:21:39.005369 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:21:39.005377 kernel: ACPI: Interpreter enabled
May 27 03:21:39.005385 kernel: ACPI: PM: (supports S0 S5)
May 27 03:21:39.005393 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:21:39.005401 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:21:39.005409 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 27 03:21:39.005417 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 27 03:21:39.005424 kernel: iommu: Default domain type: Translated
May 27 03:21:39.005434 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:21:39.005442 kernel: efivars: Registered efivars operations
May 27 03:21:39.005449 kernel: PCI: Using ACPI for IRQ routing
May 27 03:21:39.005457 kernel: PCI: System does not support PCI
May 27 03:21:39.005465 kernel: vgaarb: loaded
May 27 03:21:39.005473 kernel: clocksource: Switched to clocksource tsc-early
May 27 03:21:39.005481 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:21:39.005489 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:21:39.005496 kernel: pnp: PnP ACPI init
May 27 03:21:39.005506 kernel: pnp: PnP ACPI: found 3 devices
May 27 03:21:39.005514 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:21:39.005522 kernel: NET: Registered PF_INET protocol family
May 27 03:21:39.005530 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 03:21:39.005538 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 27 03:21:39.005546 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:21:39.005553 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:21:39.005561 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 27 03:21:39.005569 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 27 03:21:39.005578 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 03:21:39.005586 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 03:21:39.005594 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:21:39.005602 kernel: NET: Registered PF_XDP protocol family
May 27 03:21:39.005610 kernel: PCI: CLS 0 bytes, default 64
May 27 03:21:39.005617 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 27 03:21:39.005625 kernel: software IO TLB: mapped [mem 0x000000003aa59000-0x000000003ea59000] (64MB)
May 27 03:21:39.005633 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
May 27 03:21:39.005641 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
May 27 03:21:39.005651 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
May 27 03:21:39.005659 kernel: clocksource: Switched to clocksource tsc
May 27 03:21:39.005667 kernel: Initialise system trusted keyrings
May 27 03:21:39.005675 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 27 03:21:39.005683 kernel: Key type asymmetric registered
May 27 03:21:39.005690 kernel: Asymmetric key parser 'x509' registered
May 27 03:21:39.005698 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:21:39.005706 kernel: io scheduler mq-deadline registered
May 27 03:21:39.005714 kernel: io scheduler kyber registered
May 27 03:21:39.005724 kernel: io scheduler bfq registered
May 27 03:21:39.005732 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:21:39.005740 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:21:39.005748 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:21:39.005756 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 27 03:21:39.005763 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:21:39.005771 kernel: i8042: PNP: No PS/2 controller found.
May 27 03:21:39.005900 kernel: rtc_cmos 00:02: registered as rtc0
May 27 03:21:39.006206 kernel: rtc_cmos 00:02: setting system clock to 2025-05-27T03:21:38 UTC (1748316098)
May 27 03:21:39.006276 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 27 03:21:39.006286 kernel: intel_pstate: Intel P-state driver initializing
May 27 03:21:39.006294 kernel: efifb: probing for efifb
May 27 03:21:39.006303 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 27 03:21:39.006311 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 27 03:21:39.006319 kernel: efifb: scrolling: redraw
May 27 03:21:39.006327 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 03:21:39.006338 kernel: Console: switching to colour frame buffer device 128x48
May 27 03:21:39.006346 kernel: fb0: EFI VGA frame buffer device
May 27 03:21:39.006354 kernel: pstore: Using crash dump compression: deflate
May 27 03:21:39.006362 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 03:21:39.006370 kernel: NET: Registered PF_INET6 protocol family
May 27 03:21:39.006378 kernel: Segment Routing with IPv6
May 27 03:21:39.006386 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:21:39.006394 kernel: NET: Registered PF_PACKET protocol family
May 27 03:21:39.006403 kernel: Key type dns_resolver registered
May 27 03:21:39.006412 kernel: IPI shorthand broadcast: enabled
May 27 03:21:39.006420 kernel: sched_clock: Marking stable (2723043094, 85193190)->(3092734364, -284498080)
May 27 03:21:39.006428 kernel: registered taskstats version 1
May 27 03:21:39.006437 kernel: Loading compiled-in X.509 certificates
May 27 03:21:39.006444 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:21:39.006453 kernel: Demotion targets for Node 0: null
May 27 03:21:39.006461 kernel: Key type .fscrypt registered
May 27 03:21:39.006469 kernel: Key type fscrypt-provisioning registered
May 27 03:21:39.006477 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 03:21:39.006487 kernel: ima: Allocated hash algorithm: sha1
May 27 03:21:39.006495 kernel: ima: No architecture policies found
May 27 03:21:39.006503 kernel: clk: Disabling unused clocks
May 27 03:21:39.006511 kernel: Warning: unable to open an initial console.
May 27 03:21:39.006519 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:21:39.006527 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:21:39.006536 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:21:39.006544 kernel: Run /init as init process
May 27 03:21:39.006551 kernel: with arguments:
May 27 03:21:39.006561 kernel: /init
May 27 03:21:39.006569 kernel: with environment:
May 27 03:21:39.006576 kernel: HOME=/
May 27 03:21:39.006585 kernel: TERM=linux
May 27 03:21:39.006592 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:21:39.006602 systemd[1]: Successfully made /usr/ read-only.
May 27 03:21:39.006614 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:21:39.006623 systemd[1]: Detected virtualization microsoft.
May 27 03:21:39.006633 systemd[1]: Detected architecture x86-64.
May 27 03:21:39.006642 systemd[1]: Running in initrd.
May 27 03:21:39.006650 systemd[1]: No hostname configured, using default hostname.
May 27 03:21:39.006659 systemd[1]: Hostname set to .
May 27 03:21:39.006668 systemd[1]: Initializing machine ID from random generator.
May 27 03:21:39.006676 systemd[1]: Queued start job for default target initrd.target.
May 27 03:21:39.006684 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:21:39.006693 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:21:39.006704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:21:39.006713 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:21:39.006722 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:21:39.006731 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:21:39.006741 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:21:39.006750 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:21:39.006760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:21:39.006769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:21:39.006777 systemd[1]: Reached target paths.target - Path Units.
May 27 03:21:39.006786 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:21:39.006795 systemd[1]: Reached target swap.target - Swaps.
May 27 03:21:39.006804 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:21:39.006812 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:21:39.006821 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:21:39.006829 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 03:21:39.006840 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 03:21:39.006849 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:21:39.006857 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:21:39.006866 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:21:39.006875 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:21:39.006883 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 03:21:39.006892 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:21:39.006900 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 03:21:39.006909 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 03:21:39.006931 systemd[1]: Starting systemd-fsck-usr.service...
May 27 03:21:39.006940 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:21:39.006948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:21:39.006967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:21:39.006978 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 03:21:39.006989 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:21:39.006998 systemd[1]: Finished systemd-fsck-usr.service.
May 27 03:21:39.007007 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:21:39.007031 systemd-journald[205]: Collecting audit messages is disabled.
May 27 03:21:39.007058 systemd-journald[205]: Journal started
May 27 03:21:39.007080 systemd-journald[205]: Runtime Journal (/run/log/journal/3c162041c2f34aa98e7a463b5ba0860f) is 8M, max 159M, 151M free.
May 27 03:21:39.011607 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:21:39.014494 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 03:21:38.982729 systemd-modules-load[206]: Inserted module 'overlay'
May 27 03:21:39.010958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:21:39.022059 kernel: Bridge firewalling registered
May 27 03:21:39.016624 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 03:21:39.022814 systemd-modules-load[206]: Inserted module 'br_netfilter'
May 27 03:21:39.023454 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:21:39.028527 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:21:39.035415 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 03:21:39.037523 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:21:39.039186 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:21:39.042045 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:21:39.044016 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:21:39.059084 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:21:39.061730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:21:39.066172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:21:39.069524 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 03:21:39.073467 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:21:39.090630 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:21:39.115709 systemd-resolved[245]: Positive Trust Anchors:
May 27 03:21:39.116958 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:21:39.117042 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:21:39.121870 systemd-resolved[245]: Defaulting to hostname 'linux'.
May 27 03:21:39.135255 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:21:39.138957 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:21:39.155933 kernel: SCSI subsystem initialized
May 27 03:21:39.162931 kernel: Loading iSCSI transport class v2.0-870.
May 27 03:21:39.170937 kernel: iscsi: registered transport (tcp)
May 27 03:21:39.186198 kernel: iscsi: registered transport (qla4xxx)
May 27 03:21:39.186245 kernel: QLogic iSCSI HBA Driver
May 27 03:21:39.198022 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:21:39.213627 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:21:39.214127 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:21:39.242356 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 03:21:39.245677 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 03:21:39.282933 kernel: raid6: avx512x4 gen() 34205 MB/s
May 27 03:21:39.299932 kernel: raid6: avx512x2 gen() 34376 MB/s
May 27 03:21:39.317927 kernel: raid6: avx512x1 gen() 29690 MB/s
May 27 03:21:39.334927 kernel: raid6: avx2x4 gen() 32094 MB/s
May 27 03:21:39.352926 kernel: raid6: avx2x2 gen() 32787 MB/s
May 27 03:21:39.370548 kernel: raid6: avx2x1 gen() 20459 MB/s
May 27 03:21:39.370580 kernel: raid6: using algorithm avx512x2 gen() 34376 MB/s
May 27 03:21:39.388511 kernel: raid6: .... xor() 36481 MB/s, rmw enabled
May 27 03:21:39.388531 kernel: raid6: using avx512x2 recovery algorithm
May 27 03:21:39.404933 kernel: xor: automatically using best checksumming function avx
May 27 03:21:39.506933 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 03:21:39.510865 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:21:39.512063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:21:39.527979 systemd-udevd[454]: Using default interface naming scheme 'v255'.
May 27 03:21:39.531470 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:21:39.536989 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 03:21:39.550022 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
May 27 03:21:39.565542 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:21:39.566416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:21:39.596316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:21:39.601107 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 03:21:39.644932 kernel: cryptd: max_cpu_qlen set to 1000
May 27 03:21:39.647930 kernel: hv_vmbus: Vmbus version:5.3
May 27 03:21:39.662821 kernel: AES CTR mode by8 optimization enabled
May 27 03:21:39.672135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:21:39.673959 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:21:39.682024 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:21:39.688485 kernel: pps_core: LinuxPPS API ver. 1 registered
May 27 03:21:39.688518 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 27 03:21:39.691435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:21:39.693115 kernel: hv_vmbus: registering driver hv_netvsc
May 27 03:21:39.700237 kernel: hv_vmbus: registering driver hyperv_keyboard
May 27 03:21:39.746812 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:21:39.750019 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 27 03:21:39.746882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:21:39.758932 kernel: hv_vmbus: registering driver hv_pci
May 27 03:21:39.757010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:21:39.764127 kernel: PTP clock support registered
May 27 03:21:39.780210 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 03:21:39.789720 kernel: hv_vmbus: registering driver hid_hyperv
May 27 03:21:39.789752 kernel: hv_netvsc f8615163-0000-1000-2000-000d3adbd577 (unnamed net_device) (uninitialized): VF slot 1 added
May 27 03:21:39.793938 kernel: hv_vmbus: registering driver hv_storvsc
May 27 03:21:39.795588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:21:39.803129 kernel: hv_utils: Registering HyperV Utility Driver
May 27 03:21:39.803156 kernel: hv_vmbus: registering driver hv_utils
May 27 03:21:39.803172 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
May 27 03:21:39.803411 kernel: hv_utils: Shutdown IC version 3.2
May 27 03:21:39.808782 kernel: hv_utils: Heartbeat IC version 3.0
May 27 03:21:39.808812 kernel: hv_utils: TimeSync IC version 4.0
May 27 03:21:39.789576 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 27 03:21:39.793233 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 27 03:21:39.793342 systemd-journald[205]: Time jumped backwards, rotating.
May 27 03:21:39.790909 systemd-resolved[245]: Clock change detected. Flushing caches.
May 27 03:21:39.797937 kernel: scsi host0: storvsc_host_t
May 27 03:21:39.798077 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
May 27 03:21:39.804342 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
May 27 03:21:39.804844 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
May 27 03:21:39.807678 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
May 27 03:21:39.812891 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
May 27 03:21:39.822103 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
May 27 03:21:39.830007 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link)
May 27 03:21:39.830045 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 27 03:21:39.831892 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
May 27 03:21:39.831990 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 03:21:39.834061 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
May 27 03:21:39.835013 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 27 03:21:39.848831 kernel: nvme nvme0: pci function c05b:00:00.0
May 27 03:21:39.848996 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
May 27 03:21:39.853928 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 03:21:39.868899 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#190 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 03:21:40.080933 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 27 03:21:40.103082 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 03:21:40.167900 kernel: nvme nvme0: using unchecked data buffer
May 27 03:21:40.218895 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
May 27 03:21:40.232131 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 03:21:40.243453 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
May 27 03:21:40.256331 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
May 27 03:21:40.263539 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
May 27 03:21:40.264360 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
May 27 03:21:40.268909 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:21:40.275558 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:21:40.279047 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:21:40.286385 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 03:21:40.292993 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 03:21:40.303922 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 03:21:40.309929 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:21:40.819103 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
May 27 03:21:40.819321 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
May 27 03:21:40.821687 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
May 27 03:21:40.823055 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
May 27 03:21:40.826986 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
May 27 03:21:40.830887 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
May 27 03:21:40.834917 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
May 27 03:21:40.836933 kernel: pci 7870:00:00.0: enabling Extended Tags
May 27 03:21:40.850907 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
May 27 03:21:40.851075 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
May 27 03:21:40.853924 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
May 27 03:21:40.857795 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
May 27 03:21:40.867829 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
May 27 03:21:40.868075 kernel: hv_netvsc f8615163-0000-1000-2000-000d3adbd577 eth0: VF registering: eth1
May 27 03:21:40.869337 kernel: mana 7870:00:00.0 eth1: joined to eth0
May 27 03:21:40.872897 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
May 27 03:21:41.320895 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 03:21:41.322032 disk-uuid[677]: The operation has completed successfully.
May 27 03:21:41.373324 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 03:21:41.373408 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 03:21:41.408697 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 03:21:41.425809 sh[720]: Success
May 27 03:21:41.444356 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 03:21:41.444393 kernel: device-mapper: uevent: version 1.0.3
May 27 03:21:41.445406 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 03:21:41.453900 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 03:21:41.522584 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 03:21:41.526956 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 03:21:41.541599 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 03:21:41.552905 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 03:21:41.552937 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (733)
May 27 03:21:41.556303 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522
May 27 03:21:41.557411 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 03:21:41.557888 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 03:21:41.620014 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 03:21:41.622150 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:21:41.625678 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 03:21:41.628435 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 03:21:41.640513 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 03:21:41.661902 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (756)
May 27 03:21:41.665900 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:21:41.665933 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:21:41.665944 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 03:21:41.681919 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:21:41.682454 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 03:21:41.684808 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 03:21:41.714049 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:21:41.716658 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:21:41.743911 systemd-networkd[902]: lo: Link UP
May 27 03:21:41.748964 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
May 27 03:21:41.743920 systemd-networkd[902]: lo: Gained carrier
May 27 03:21:41.745452 systemd-networkd[902]: Enumeration completed
May 27 03:21:41.745510 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:21:41.745849 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:21:41.760234 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
May 27 03:21:41.760409 kernel: hv_netvsc f8615163-0000-1000-2000-000d3adbd577 eth0: Data path switched to VF: enP30832s1
May 27 03:21:41.745852 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:21:41.754366 systemd[1]: Reached target network.target - Network.
May 27 03:21:41.756962 systemd-networkd[902]: enP30832s1: Link UP
May 27 03:21:41.757027 systemd-networkd[902]: eth0: Link UP
May 27 03:21:41.757426 systemd-networkd[902]: eth0: Gained carrier
May 27 03:21:41.757435 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:21:41.762231 systemd-networkd[902]: enP30832s1: Gained carrier
May 27 03:21:41.768912 systemd-networkd[902]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 27 03:21:41.889761 ignition[841]: Ignition 2.21.0
May 27 03:21:41.889776 ignition[841]: Stage: fetch-offline
May 27 03:21:41.889972 ignition[841]: no configs at "/usr/lib/ignition/base.d"
May 27 03:21:41.889979 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 03:21:41.893103 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:21:41.890092 ignition[841]: parsed url from cmdline: ""
May 27 03:21:41.897655 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 03:21:41.890095 ignition[841]: no config URL provided
May 27 03:21:41.890100 ignition[841]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:21:41.890108 ignition[841]: no config at "/usr/lib/ignition/user.ign"
May 27 03:21:41.890113 ignition[841]: failed to fetch config: resource requires networking
May 27 03:21:41.890301 ignition[841]: Ignition finished successfully
May 27 03:21:41.922093 ignition[913]: Ignition 2.21.0
May 27 03:21:41.922102 ignition[913]: Stage: fetch
May 27 03:21:41.922447 ignition[913]: no configs at "/usr/lib/ignition/base.d"
May 27 03:21:41.922455 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 03:21:41.922539 ignition[913]: parsed url from cmdline: ""
May 27 03:21:41.922542 ignition[913]: no config URL provided
May 27 03:21:41.922547 ignition[913]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:21:41.922553 ignition[913]: no config at "/usr/lib/ignition/user.ign"
May 27 03:21:41.922588 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 27 03:21:43.033570 ignition[913]: GET result: OK
May 27 03:21:43.033691 ignition[913]: config has been read from IMDS userdata
May 27 03:21:43.033728 ignition[913]: parsing config with SHA512: e4856ec2a5606e6a0a6bdb5237f16018b1ce693e68d13148ea41bf0dbc2c96ef8cb4221eb81e5f412432f0a35f182d080a5c6adc02bb62821e6bd7b937e1ac53
May 27 03:21:43.041094 unknown[913]: fetched base config from "system"
May 27 03:21:43.041104 unknown[913]: fetched base config from "system"
May 27 03:21:43.041109 unknown[913]: fetched user config from "azure"
May 27 03:21:43.046380 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 03:21:43.044086 ignition[913]: fetch: fetch complete
May 27 03:21:43.050038 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 03:21:43.044090 ignition[913]: fetch: fetch passed
May 27 03:21:43.044180 ignition[913]: Ignition finished successfully
May 27 03:21:43.072385 ignition[920]: Ignition 2.21.0
May 27 03:21:43.072396 ignition[920]: Stage: kargs
May 27 03:21:43.072613 ignition[920]: no configs at "/usr/lib/ignition/base.d"
May 27 03:21:43.074779 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 03:21:43.072621 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 03:21:43.078150 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 03:21:43.073772 ignition[920]: kargs: kargs passed
May 27 03:21:43.073811 ignition[920]: Ignition finished successfully
May 27 03:21:43.098655 ignition[927]: Ignition 2.21.0
May 27 03:21:43.098664 ignition[927]: Stage: disks
May 27 03:21:43.098846 ignition[927]: no configs at "/usr/lib/ignition/base.d"
May 27 03:21:43.101245 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 03:21:43.098853 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 03:21:43.104863 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 03:21:43.100317 ignition[927]: disks: disks passed
May 27 03:21:43.107526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 03:21:43.100366 ignition[927]: Ignition finished successfully
May 27 03:21:43.111150 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:21:43.116313 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:21:43.123169 systemd[1]: Reached target basic.target - Basic System.
May 27 03:21:43.128546 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 03:21:43.155010 systemd-fsck[936]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
May 27 03:21:43.157946 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 03:21:43.162326 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 03:21:43.298772 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 03:21:43.301725 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none.
May 27 03:21:43.299198 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 03:21:43.303137 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:21:43.305957 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 03:21:43.313638 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 27 03:21:43.317405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 03:21:43.317465 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:21:43.321402 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 03:21:43.326597 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 03:21:43.333649 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (946)
May 27 03:21:43.333776 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:21:43.337301 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:21:43.337329 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 03:21:43.345658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:21:43.456616 coreos-metadata[948]: May 27 03:21:43.456 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 03:21:43.461237 coreos-metadata[948]: May 27 03:21:43.461 INFO Fetch successful
May 27 03:21:43.463965 coreos-metadata[948]: May 27 03:21:43.461 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 27 03:21:43.470384 coreos-metadata[948]: May 27 03:21:43.470 INFO Fetch successful
May 27 03:21:43.474958 coreos-metadata[948]: May 27 03:21:43.474 INFO wrote hostname ci-4344.0.0-a-98ca04e8ee to /sysroot/etc/hostname
May 27 03:21:43.476698 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 03:21:43.486491 initrd-setup-root[976]: cut: /sysroot/etc/passwd: No such file or directory
May 27 03:21:43.496091 initrd-setup-root[983]: cut: /sysroot/etc/group: No such file or directory
May 27 03:21:43.502619 initrd-setup-root[990]: cut: /sysroot/etc/shadow: No such file or directory
May 27 03:21:43.506505 initrd-setup-root[997]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 03:21:43.614066 systemd-networkd[902]: enP30832s1: Gained IPv6LL
May 27 03:21:43.703564 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 03:21:43.706679 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 03:21:43.710357 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 03:21:43.721831 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 03:21:43.727099 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:21:43.741696 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 03:21:43.742968 systemd-networkd[902]: eth0: Gained IPv6LL
May 27 03:21:43.748304 ignition[1067]: INFO : Ignition 2.21.0
May 27 03:21:43.748304 ignition[1067]: INFO : Stage: mount
May 27 03:21:43.754975 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:21:43.754975 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 03:21:43.754975 ignition[1067]: INFO : mount: mount passed
May 27 03:21:43.754975 ignition[1067]: INFO : Ignition finished successfully
May 27 03:21:43.751070 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 03:21:43.753393 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 03:21:44.301183 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:21:44.326948 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1079)
May 27 03:21:44.329176 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:21:44.329217 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:21:44.330075 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 03:21:44.337083 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:21:44.359872 ignition[1096]: INFO : Ignition 2.21.0
May 27 03:21:44.359872 ignition[1096]: INFO : Stage: files
May 27 03:21:44.364766 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:21:44.364766 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 03:21:44.364766 ignition[1096]: DEBUG : files: compiled without relabeling support, skipping
May 27 03:21:44.364766 ignition[1096]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 03:21:44.364766 ignition[1096]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 03:21:44.378798 ignition[1096]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 03:21:44.378798 ignition[1096]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 03:21:44.378798 ignition[1096]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 03:21:44.376982 unknown[1096]: wrote ssh authorized keys file for user: core
May 27 03:21:44.387934 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 27 03:21:44.387934 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 27 03:21:44.448564 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 03:21:44.616494 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 27 03:21:44.616494 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:21:44.623944 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 03:21:44.650905 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 03:21:44.650905 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 03:21:44.650905 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 27 03:21:45.420238 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 03:21:46.047711 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 03:21:46.047711 ignition[1096]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 03:21:46.058422 ignition[1096]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:21:46.064077 ignition[1096]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:21:46.064077 ignition[1096]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 03:21:46.067872 ignition[1096]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 27 03:21:46.067872 ignition[1096]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 27 03:21:46.074530 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:21:46.074530 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:21:46.074530 ignition[1096]: INFO : files: files passed
May 27 03:21:46.074530 ignition[1096]: INFO : Ignition finished successfully
May 27 03:21:46.071737 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 03:21:46.077586 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 03:21:46.088738 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 03:21:46.092250 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 03:21:46.092332 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 03:21:46.107035 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:21:46.110967 initrd-setup-root-after-ignition[1125]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:21:46.113373 initrd-setup-root-after-ignition[1129]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:21:46.111549 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:21:46.120105 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:21:46.125113 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:21:46.160330 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:21:46.160413 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:21:46.162861 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:21:46.165254 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:21:46.167203 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:21:46.167788 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:21:46.192154 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:21:46.193212 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:21:46.209074 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:21:46.209386 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:21:46.213370 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:21:46.217187 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:21:46.217293 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:21:46.222918 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:21:46.226031 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:21:46.228784 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:21:46.233021 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:21:46.237000 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:21:46.239660 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:21:46.242312 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:21:46.247019 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:21:46.251019 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:21:46.255021 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:21:46.258004 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:21:46.260976 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:21:46.261090 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:21:46.266964 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:21:46.271024 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:21:46.275981 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:21:46.276396 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:21:46.279041 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:21:46.282222 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:21:46.288141 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:21:46.288255 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:21:46.292749 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:21:46.292851 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:21:46.296895 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 27 03:21:46.297025 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 03:21:46.303403 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:21:46.310088 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:21:46.312675 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:21:46.312820 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:21:46.320302 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:21:46.320402 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:21:46.330401 ignition[1149]: INFO : Ignition 2.21.0
May 27 03:21:46.330401 ignition[1149]: INFO : Stage: umount
May 27 03:21:46.330401 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:21:46.330401 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 03:21:46.330320 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:21:46.340657 ignition[1149]: INFO : umount: umount passed
May 27 03:21:46.340657 ignition[1149]: INFO : Ignition finished successfully
May 27 03:21:46.333984 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:21:46.339170 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:21:46.339241 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:21:46.346567 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:21:46.346613 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:21:46.352554 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:21:46.353418 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:21:46.355327 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 03:21:46.357708 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 03:21:46.361890 systemd[1]: Stopped target network.target - Network.
May 27 03:21:46.363464 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:21:46.364310 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:21:46.368937 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:21:46.370679 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:21:46.372253 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:21:46.376750 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:21:46.377066 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:21:46.383637 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:21:46.383675 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:21:46.391936 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:21:46.391975 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:21:46.394135 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:21:46.394183 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:21:46.395566 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:21:46.395599 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:21:46.396142 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:21:46.396354 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:21:46.403832 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:21:46.403947 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:21:46.409456 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:21:46.409587 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:21:46.409653 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:21:46.414187 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:21:46.415966 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:21:46.427953 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:21:46.427996 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:21:46.428767 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:21:46.428914 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:21:46.428953 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:21:46.429251 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:21:46.429282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:21:46.445555 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:21:46.445603 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:21:46.447384 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:21:46.447417 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:21:46.449920 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:21:46.454163 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:21:46.454240 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:21:46.472006 kernel: hv_netvsc f8615163-0000-1000-2000-000d3adbd577 eth0: Data path switched from VF: enP30832s1
May 27 03:21:46.472153 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
May 27 03:21:46.454278 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:21:46.475080 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:21:46.475167 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:21:46.479100 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:21:46.479212 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:21:46.483347 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:21:46.483413 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:21:46.486953 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:21:46.486982 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:21:46.489458 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:21:46.489499 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:21:46.494351 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:21:46.494399 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:21:46.495376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:21:46.495412 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:21:46.497976 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:21:46.498147 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:21:46.498197 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:21:46.502325 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:21:46.502375 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:21:46.502640 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 03:21:46.502681 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:21:46.512154 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:21:46.512204 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:21:46.517073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:21:46.517115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:21:46.523181 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 03:21:46.523225 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 03:21:46.523254 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 03:21:46.523285 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:21:46.523525 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:21:46.523590 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:21:46.936408 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:21:46.936534 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:21:46.941264 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:21:46.943642 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:21:46.945015 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:21:46.948466 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:21:46.982235 systemd[1]: Switching root.
May 27 03:21:47.024382 systemd-journald[205]: Journal stopped
May 27 03:21:48.708470 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
May 27 03:21:48.708507 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:21:48.708519 kernel: SELinux: policy capability open_perms=1
May 27 03:21:48.708529 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:21:48.708538 kernel: SELinux: policy capability always_check_network=0
May 27 03:21:48.708547 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:21:48.708558 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:21:48.708568 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:21:48.708577 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:21:48.708586 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:21:48.708595 kernel: audit: type=1403 audit(1748316107.608:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:21:48.708606 systemd[1]: Successfully loaded SELinux policy in 56.811ms.
May 27 03:21:48.708617 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.343ms.
May 27 03:21:48.708630 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:21:48.708641 systemd[1]: Detected virtualization microsoft.
May 27 03:21:48.708651 systemd[1]: Detected architecture x86-64.
May 27 03:21:48.708662 systemd[1]: Detected first boot.
May 27 03:21:48.708672 systemd[1]: Hostname set to .
May 27 03:21:48.708684 systemd[1]: Initializing machine ID from random generator.
May 27 03:21:48.708694 zram_generator::config[1192]: No configuration found.
May 27 03:21:48.708705 kernel: Guest personality initialized and is inactive
May 27 03:21:48.708715 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
May 27 03:21:48.708724 kernel: Initialized host personality
May 27 03:21:48.708733 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:21:48.708743 systemd[1]: Populated /etc with preset unit settings.
May 27 03:21:48.708755 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:21:48.708765 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:21:48.708775 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:21:48.708785 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:21:48.708795 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:21:48.708806 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:21:48.708816 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:21:48.708827 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:21:48.708837 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:21:48.708847 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:21:48.708858 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:21:48.708868 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:21:48.708895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:21:48.708907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:21:48.708917 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:21:48.708930 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:21:48.708943 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:21:48.708954 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:21:48.708964 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:21:48.708975 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:21:48.708985 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:21:48.708996 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:21:48.709004 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:21:48.709015 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:21:48.709024 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:21:48.709033 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:21:48.709042 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:21:48.709052 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:21:48.709061 systemd[1]: Reached target swap.target - Swaps.
May 27 03:21:48.709070 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:21:48.709079 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:21:48.709090 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:21:48.709100 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:21:48.709110 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:21:48.709120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:21:48.709129 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:21:48.709139 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:21:48.709149 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:21:48.709158 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:21:48.709168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:48.709177 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:21:48.709186 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:21:48.709195 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:21:48.709205 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:21:48.709216 systemd[1]: Reached target machines.target - Containers.
May 27 03:21:48.709225 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:21:48.709234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:21:48.709244 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:21:48.709253 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:21:48.709262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:21:48.709271 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:21:48.709280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:21:48.709291 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:21:48.709300 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:21:48.709311 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:21:48.709320 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:21:48.709329 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:21:48.709339 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:21:48.709348 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:21:48.709358 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:21:48.709369 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:21:48.709378 kernel: loop: module loaded
May 27 03:21:48.709387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:21:48.709396 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:21:48.709406 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:21:48.709415 kernel: fuse: init (API version 7.41)
May 27 03:21:48.709424 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:21:48.709453 systemd-journald[1299]: Collecting audit messages is disabled.
May 27 03:21:48.709479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:21:48.709488 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:21:48.709498 systemd[1]: Stopped verity-setup.service.
May 27 03:21:48.709507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:48.709518 systemd-journald[1299]: Journal started
May 27 03:21:48.709541 systemd-journald[1299]: Runtime Journal (/run/log/journal/939b8a9731ac427fb3e47a39afc7648c) is 8M, max 159M, 151M free.
May 27 03:21:48.714970 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:21:48.358744 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:21:48.366416 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 27 03:21:48.366725 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:21:48.725156 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:21:48.725950 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:21:48.728490 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:21:48.732588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:21:48.733955 kernel: ACPI: bus type drm_connector registered
May 27 03:21:48.734757 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:21:48.736280 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:21:48.738335 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:21:48.744215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:21:48.748172 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:21:48.748317 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:21:48.751589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:21:48.751768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:21:48.754533 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:21:48.754720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:21:48.757300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:21:48.757509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:21:48.760544 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:21:48.760737 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:21:48.763275 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:21:48.763466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:21:48.766052 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:21:48.768463 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:21:48.771542 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:21:48.774744 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:21:48.786737 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:21:48.788685 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:21:48.792694 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:21:48.800059 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:21:48.803976 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:21:48.804006 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:21:48.807625 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:21:48.810129 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:21:48.812143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:21:48.813405 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:21:48.816866 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:21:48.818656 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:21:48.819775 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:21:48.821899 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:21:48.823588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:21:48.826554 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:21:48.831783 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:21:48.835369 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:21:48.839040 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:21:48.846045 systemd-journald[1299]: Time spent on flushing to /var/log/journal/939b8a9731ac427fb3e47a39afc7648c is 51.232ms for 989 entries.
May 27 03:21:48.846045 systemd-journald[1299]: System Journal (/var/log/journal/939b8a9731ac427fb3e47a39afc7648c) is 11.8M, max 2.6G, 2.6G free.
May 27 03:21:48.988643 systemd-journald[1299]: Received client request to flush runtime journal.
May 27 03:21:48.988688 kernel: loop0: detected capacity change from 0 to 146240
May 27 03:21:48.988707 systemd-journald[1299]: /var/log/journal/939b8a9731ac427fb3e47a39afc7648c/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
May 27 03:21:48.988731 systemd-journald[1299]: Rotating system journal.
May 27 03:21:48.850050 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:21:48.855151 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:21:48.859515 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:21:48.881068 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
May 27 03:21:48.881078 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
May 27 03:21:48.888211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:21:48.891699 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:21:48.896338 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:21:48.947169 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:21:48.990705 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 03:21:49.002812 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 03:21:49.007061 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:21:49.030821 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. May 27 03:21:49.030838 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. May 27 03:21:49.033311 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:21:49.048079 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 03:21:49.067233 kernel: loop1: detected capacity change from 0 to 28536 May 27 03:21:49.156897 kernel: loop2: detected capacity change from 0 to 113872 May 27 03:21:49.238899 kernel: loop3: detected capacity change from 0 to 229808 May 27 03:21:49.268917 kernel: loop4: detected capacity change from 0 to 146240 May 27 03:21:49.287898 kernel: loop5: detected capacity change from 0 to 28536 May 27 03:21:49.305899 kernel: loop6: detected capacity change from 0 to 113872 May 27 03:21:49.324927 kernel: loop7: detected capacity change from 0 to 229808 May 27 03:21:49.340257 (sd-merge)[1360]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 27 03:21:49.340595 (sd-merge)[1360]: Merged extensions into '/usr'. May 27 03:21:49.346889 systemd[1]: Reload requested from client PID 1334 ('systemd-sysext') (unit systemd-sysext.service)... May 27 03:21:49.346977 systemd[1]: Reloading... May 27 03:21:49.414897 zram_generator::config[1384]: No configuration found. May 27 03:21:49.532781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 27 03:21:49.603871 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:21:49.604225 systemd[1]: Reloading finished in 256 ms.
May 27 03:21:49.622248 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 03:21:49.624032 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:21:49.633758 systemd[1]: Starting ensure-sysext.service...
May 27 03:21:49.637983 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:21:49.641572 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:21:49.657197 systemd[1]: Reload requested from client PID 1445 ('systemctl') (unit ensure-sysext.service)...
May 27 03:21:49.657214 systemd[1]: Reloading...
May 27 03:21:49.665530 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:21:49.665773 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:21:49.666068 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:21:49.666520 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:21:49.669589 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:21:49.671175 systemd-tmpfiles[1446]: ACLs are not supported, ignoring.
May 27 03:21:49.672105 systemd-tmpfiles[1446]: ACLs are not supported, ignoring.
May 27 03:21:49.678591 systemd-udevd[1447]: Using default interface naming scheme 'v255'.
May 27 03:21:49.680116 systemd-tmpfiles[1446]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:21:49.680125 systemd-tmpfiles[1446]: Skipping /boot
May 27 03:21:49.687395 systemd-tmpfiles[1446]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:21:49.687407 systemd-tmpfiles[1446]: Skipping /boot
May 27 03:21:49.740916 zram_generator::config[1478]: No configuration found.
May 27 03:21:49.927259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:21:49.970041 kernel: hv_vmbus: registering driver hv_balloon
May 27 03:21:49.976896 kernel: mousedev: PS/2 mouse device common for all mice
May 27 03:21:50.001346 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#125 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 03:21:50.001565 kernel: hv_vmbus: registering driver hyperv_fb
May 27 03:21:50.029923 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 27 03:21:50.030001 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 27 03:21:50.030018 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 27 03:21:50.037819 kernel: Console: switching to colour dummy device 80x25
May 27 03:21:50.040792 kernel: Console: switching to colour frame buffer device 128x48
May 27 03:21:50.068836 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 03:21:50.069066 systemd[1]: Reloading finished in 411 ms.
May 27 03:21:50.074671 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:21:50.077683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:21:50.105091 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:21:50.109319 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 03:21:50.116997 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 03:21:50.127900 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:21:50.136200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:21:50.141053 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 03:21:50.152682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:50.152835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:21:50.154874 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:21:50.164970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:21:50.173085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:21:50.174872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:21:50.174995 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:21:50.177413 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 03:21:50.178935 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:50.179828 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 03:21:50.198718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:21:50.200341 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:21:50.213019 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 03:21:50.221363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:21:50.222032 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:21:50.224809 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:21:50.225103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:21:50.233785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:50.235599 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:21:50.237348 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:21:50.242069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:21:50.243577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:21:50.243782 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:21:50.244002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:21:50.244561 systemd[1]: Reached target time-set.target - System Time Set.
May 27 03:21:50.245154 ldconfig[1329]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:21:50.248794 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:21:50.256171 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:21:50.263490 systemd[1]: Finished ensure-sysext.service.
May 27 03:21:50.265712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:21:50.267075 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:21:50.269793 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:21:50.273444 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:21:50.273577 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:21:50.309430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:21:50.312191 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 03:21:50.318507 augenrules[1652]: No rules
May 27 03:21:50.320252 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:21:50.324024 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:21:50.327506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:21:50.328517 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:21:50.341353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
May 27 03:21:50.346274 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 03:21:50.354803 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 03:21:50.384852 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 03:21:50.388382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:21:50.391961 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:21:50.408268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 03:21:50.424183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:21:50.424346 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:21:50.427449 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:21:50.433021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:21:50.466920 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
May 27 03:21:50.468327 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 03:21:50.538211 systemd-networkd[1599]: lo: Link UP
May 27 03:21:50.538234 systemd-resolved[1601]: Positive Trust Anchors:
May 27 03:21:50.538243 systemd-resolved[1601]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:21:50.538274 systemd-resolved[1601]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:21:50.538468 systemd-networkd[1599]: lo: Gained carrier
May 27 03:21:50.540271 systemd-networkd[1599]: Enumeration completed
May 27 03:21:50.540449 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:21:50.542640 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:21:50.542721 systemd-networkd[1599]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:21:50.543070 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 03:21:50.543977 systemd-resolved[1601]: Using system hostname 'ci-4344.0.0-a-98ca04e8ee'.
May 27 03:21:50.547236 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 03:21:50.548089 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
May 27 03:21:50.553930 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
May 27 03:21:50.554275 kernel: hv_netvsc f8615163-0000-1000-2000-000d3adbd577 eth0: Data path switched to VF: enP30832s1
May 27 03:21:50.563017 systemd-networkd[1599]: enP30832s1: Link UP
May 27 03:21:50.563165 systemd-networkd[1599]: eth0: Link UP
May 27 03:21:50.563205 systemd-networkd[1599]: eth0: Gained carrier
May 27 03:21:50.563245 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:21:50.565988 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:21:50.566155 systemd[1]: Reached target network.target - Network.
May 27 03:21:50.566698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:21:50.572668 systemd-networkd[1599]: enP30832s1: Gained carrier
May 27 03:21:50.574680 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 03:21:50.577948 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:21:50.580944 systemd-networkd[1599]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 27 03:21:50.581100 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:21:50.585034 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 03:21:50.586316 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 03:21:50.587694 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 03:21:50.589118 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 03:21:50.591978 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 03:21:50.594915 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 03:21:50.596129 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 03:21:50.596152 systemd[1]: Reached target paths.target - Path Units.
May 27 03:21:50.598914 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:21:50.601663 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 03:21:50.605626 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 03:21:50.608352 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 03:21:50.611043 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 03:21:50.612677 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 03:21:50.620241 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 03:21:50.623443 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 03:21:50.628304 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 03:21:50.630074 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:21:50.631378 systemd[1]: Reached target basic.target - Basic System.
May 27 03:21:50.633981 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 03:21:50.634005 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 03:21:50.635667 systemd[1]: Starting chronyd.service - NTP client/server...
May 27 03:21:50.638152 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 03:21:50.645986 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 03:21:50.648972 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 03:21:50.652394 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 03:21:50.657165 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 03:21:50.662018 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 03:21:50.664962 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 03:21:50.665758 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 03:21:50.671043 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 03:21:50.677791 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 03:21:50.680752 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 03:21:50.685460 jq[1692]: false
May 27 03:21:50.687258 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 03:21:50.694028 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 03:21:50.696868 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 03:21:50.700906 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Refreshing passwd entry cache
May 27 03:21:50.699777 oslogin_cache_refresh[1695]: Refreshing passwd entry cache
May 27 03:21:50.701392 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 03:21:50.703175 systemd[1]: Starting update-engine.service - Update Engine...
May 27 03:21:50.706017 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 03:21:50.718342 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 03:21:50.723241 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 03:21:50.724573 jq[1706]: true
May 27 03:21:50.723408 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 03:21:50.725135 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 03:21:50.725235 oslogin_cache_refresh[1695]: Failure getting users, quitting
May 27 03:21:50.725809 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Failure getting users, quitting
May 27 03:21:50.725809 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:21:50.725809 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Refreshing group entry cache
May 27 03:21:50.725252 oslogin_cache_refresh[1695]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:21:50.725290 oslogin_cache_refresh[1695]: Refreshing group entry cache
May 27 03:21:50.729079 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 03:21:50.739014 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 03:21:50.735993 oslogin_cache_refresh[1695]: Failure getting groups, quitting
May 27 03:21:50.741568 extend-filesystems[1693]: Found loop4
May 27 03:21:50.741568 extend-filesystems[1693]: Found loop5
May 27 03:21:50.741568 extend-filesystems[1693]: Found loop6
May 27 03:21:50.741568 extend-filesystems[1693]: Found loop7
May 27 03:21:50.741568 extend-filesystems[1693]: Found sr0
May 27 03:21:50.741568 extend-filesystems[1693]: Found nvme0n1
May 27 03:21:50.741568 extend-filesystems[1693]: Found nvme0n1p1
May 27 03:21:50.741568 extend-filesystems[1693]: Found nvme0n1p2
May 27 03:21:50.741568 extend-filesystems[1693]: Found nvme0n1p3
May 27 03:21:50.741568 extend-filesystems[1693]: Found usr
May 27 03:21:50.741568 extend-filesystems[1693]: Found nvme0n1p4
May 27 03:21:50.741568 extend-filesystems[1693]: Found nvme0n1p6
May 27 03:21:50.741568 extend-filesystems[1693]: Found nvme0n1p7
May 27 03:21:50.741568 extend-filesystems[1693]: Found nvme0n1p9
May 27 03:21:50.741568 extend-filesystems[1693]: Checking size of /dev/nvme0n1p9
May 27 03:21:50.791569 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Failure getting groups, quitting
May 27 03:21:50.791569 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:21:50.739197 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 03:21:50.736002 oslogin_cache_refresh[1695]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:21:50.791694 update_engine[1705]: I20250527 03:21:50.770143  1705 main.cc:92] Flatcar Update Engine starting
May 27 03:21:50.791837 extend-filesystems[1693]: Old size kept for /dev/nvme0n1p9
May 27 03:21:50.743266 (chronyd)[1687]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
May 27 03:21:50.759528 chronyd[1729]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
May 27 03:21:50.750691 systemd[1]: motdgen.service: Deactivated successfully.
May 27 03:21:50.768203 chronyd[1729]: Timezone right/UTC failed leap second check, ignoring
May 27 03:21:50.755145 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 03:21:50.768340 chronyd[1729]: Loaded seccomp filter (level 2)
May 27 03:21:50.794143 jq[1716]: true
May 27 03:21:50.771748 systemd[1]: Started chronyd.service - NTP client/server.
May 27 03:21:50.782744 (ntainerd)[1720]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 03:21:50.784856 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 03:21:50.785065 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 03:21:50.818820 dbus-daemon[1690]: [system] SELinux support is enabled
May 27 03:21:50.818952 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 03:21:50.822691 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 03:21:50.822709 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 03:21:50.825697 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 03:21:50.825715 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 03:21:50.837185 systemd[1]: Started update-engine.service - Update Engine.
May 27 03:21:50.840174 update_engine[1705]: I20250527 03:21:50.839923  1705 update_check_scheduler.cc:74] Next update check in 11m42s
May 27 03:21:50.840601 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 03:21:50.900069 systemd-logind[1703]: New seat seat0.
May 27 03:21:50.901160 tar[1710]: linux-amd64/LICENSE
May 27 03:21:50.901296 tar[1710]: linux-amd64/helm
May 27 03:21:50.903100 systemd-logind[1703]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 03:21:50.903214 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 03:21:50.930891 bash[1763]: Updated "/home/core/.ssh/authorized_keys"
May 27 03:21:50.927158 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 03:21:50.931982 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 03:21:50.957991 coreos-metadata[1689]: May 27 03:21:50.957 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 03:21:50.961772 coreos-metadata[1689]: May 27 03:21:50.961 INFO Fetch successful
May 27 03:21:50.961772 coreos-metadata[1689]: May 27 03:21:50.961 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
May 27 03:21:50.964990 coreos-metadata[1689]: May 27 03:21:50.964 INFO Fetch successful
May 27 03:21:50.965281 coreos-metadata[1689]: May 27 03:21:50.965 INFO Fetching http://168.63.129.16/machine/fb01c612-b2c2-4092-a518-3f48312a7d9e/ccd147f6%2Deeff%2D4db0%2Db37c%2Dda53100b2871.%5Fci%2D4344.0.0%2Da%2D98ca04e8ee?comp=config&type=sharedConfig&incarnation=1: Attempt #1
May 27 03:21:50.968152 coreos-metadata[1689]: May 27 03:21:50.967 INFO Fetch successful
May 27 03:21:50.968469 coreos-metadata[1689]: May 27 03:21:50.968 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
May 27 03:21:50.981887 coreos-metadata[1689]: May 27 03:21:50.981 INFO Fetch successful
May 27 03:21:51.029074 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 03:21:51.031494 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 03:21:51.109977 locksmithd[1746]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 03:21:51.195886 containerd[1720]: time="2025-05-27T03:21:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 03:21:51.198242 containerd[1720]: time="2025-05-27T03:21:51.198206529Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 03:21:51.223753 containerd[1720]: time="2025-05-27T03:21:51.223698610Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.128µs"
May 27 03:21:51.223834 containerd[1720]: time="2025-05-27T03:21:51.223817443Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 03:21:51.223897 containerd[1720]: time="2025-05-27T03:21:51.223888759Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 03:21:51.224041 containerd[1720]: time="2025-05-27T03:21:51.224033058Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.225891910Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.225918345Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.225973537Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.225983271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.226186424Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.226196884Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.226206561Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.226213903Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.226262062Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.226419320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:21:51.226503 containerd[1720]: time="2025-05-27T03:21:51.226440430Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:21:51.226732 containerd[1720]: time="2025-05-27T03:21:51.226480013Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 03:21:51.227086 containerd[1720]: time="2025-05-27T03:21:51.226766298Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 03:21:51.227086 containerd[1720]: time="2025-05-27T03:21:51.227022060Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 03:21:51.227086 containerd[1720]: time="2025-05-27T03:21:51.227070216Z" level=info msg="metadata content store policy set" policy=shared
May 27 03:21:51.237101 containerd[1720]: time="2025-05-27T03:21:51.237075845Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 03:21:51.237198 containerd[1720]: time="2025-05-27T03:21:51.237188212Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 03:21:51.237263 containerd[1720]: time="2025-05-27T03:21:51.237255015Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.238894499Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.238911459Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.238923099Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.238937377Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.238949290Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.238960639Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.238970277Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.238979080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.238990618Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.239080202Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.239100743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.239121664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.239135193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 03:21:51.239329 containerd[1720]: time="2025-05-27T03:21:51.239145279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 03:21:51.239603 containerd[1720]: time="2025-05-27T03:21:51.239154922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 03:21:51.239603 containerd[1720]: time="2025-05-27T03:21:51.239178787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 03:21:51.239603 containerd[1720]: time="2025-05-27T03:21:51.239191838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 03:21:51.239603 containerd[1720]: time="2025-05-27T03:21:51.239210525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 03:21:51.239603 containerd[1720]: time="2025-05-27T03:21:51.239220399Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 03:21:51.239603 containerd[1720]: time="2025-05-27T03:21:51.239231219Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 03:21:51.239603 containerd[1720]: time="2025-05-27T03:21:51.239292366Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 03:21:51.239603 containerd[1720]: time="2025-05-27T03:21:51.239305179Z" level=info msg="Start snapshots syncer"
May 27 03:21:51.240166 containerd[1720]: time="2025-05-27T03:21:51.239774079Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 03:21:51.240166 containerd[1720]: time="2025-05-27T03:21:51.240066184Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 03:21:51.240322 containerd[1720]: time="2025-05-27T03:21:51.240112368Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.242902037Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243023334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243056237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243069437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243081544Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243094707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243105671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243122730Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243152472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243162890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:21:51.243297 containerd[1720]: time="2025-05-27T03:21:51.243177052Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.245901003Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.245931866Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.245940674Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.245990469Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.245997722Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.246005944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.246017133Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.246035134Z" level=info msg="runtime interface created" May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.246039894Z" level=info msg="created NRI interface" May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.246052274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.246066637Z" level=info msg="Connect containerd service" May 27 03:21:51.246722 containerd[1720]: time="2025-05-27T03:21:51.246105458Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:21:51.247165 
containerd[1720]: time="2025-05-27T03:21:51.247145925Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:21:51.478965 containerd[1720]: time="2025-05-27T03:21:51.478907956Z" level=info msg="Start subscribing containerd event" May 27 03:21:51.479082 containerd[1720]: time="2025-05-27T03:21:51.479054112Z" level=info msg="Start recovering state" May 27 03:21:51.479177 containerd[1720]: time="2025-05-27T03:21:51.479170567Z" level=info msg="Start event monitor" May 27 03:21:51.479219 containerd[1720]: time="2025-05-27T03:21:51.479211936Z" level=info msg="Start cni network conf syncer for default" May 27 03:21:51.479255 containerd[1720]: time="2025-05-27T03:21:51.479249593Z" level=info msg="Start streaming server" May 27 03:21:51.484657 containerd[1720]: time="2025-05-27T03:21:51.480898085Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:21:51.484657 containerd[1720]: time="2025-05-27T03:21:51.480910869Z" level=info msg="runtime interface starting up..." May 27 03:21:51.484657 containerd[1720]: time="2025-05-27T03:21:51.480918323Z" level=info msg="starting plugins..." May 27 03:21:51.484657 containerd[1720]: time="2025-05-27T03:21:51.480933073Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:21:51.484657 containerd[1720]: time="2025-05-27T03:21:51.479367412Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:21:51.484657 containerd[1720]: time="2025-05-27T03:21:51.481046004Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 03:21:51.484657 containerd[1720]: time="2025-05-27T03:21:51.483962620Z" level=info msg="containerd successfully booted in 0.289595s" May 27 03:21:51.481165 systemd[1]: Started containerd.service - containerd container runtime. 
May 27 03:21:51.493243 tar[1710]: linux-amd64/README.md May 27 03:21:51.510067 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:21:51.556700 sshd_keygen[1730]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:21:51.572037 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 03:21:51.577340 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:21:51.591623 systemd[1]: issuegen.service: Deactivated successfully. May 27 03:21:51.591798 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:21:51.594292 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:21:51.609500 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:21:51.612201 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:21:51.615352 systemd-networkd[1599]: enP30832s1: Gained IPv6LL May 27 03:21:51.616292 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:21:51.622085 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:21:51.934120 systemd-networkd[1599]: eth0: Gained IPv6LL May 27 03:21:51.936770 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:21:51.942418 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:21:51.946159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:21:51.952055 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:21:51.955308 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 27 03:21:51.982057 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 27 03:21:51.987052 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 27 03:21:52.568852 waagent[1833]: 2025-05-27T03:21:52.568494Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 27 03:21:52.571001 waagent[1833]: 2025-05-27T03:21:52.570951Z INFO Daemon Daemon OS: flatcar 4344.0.0 May 27 03:21:52.572615 waagent[1833]: 2025-05-27T03:21:52.572573Z INFO Daemon Daemon Python: 3.11.12 May 27 03:21:52.576182 waagent[1833]: 2025-05-27T03:21:52.576111Z INFO Daemon Daemon Run daemon May 27 03:21:52.577987 waagent[1833]: 2025-05-27T03:21:52.577718Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.0.0' May 27 03:21:52.581893 waagent[1833]: 2025-05-27T03:21:52.580795Z INFO Daemon Daemon Using waagent for provisioning May 27 03:21:52.584096 waagent[1833]: 2025-05-27T03:21:52.584055Z INFO Daemon Daemon Activate resource disk May 27 03:21:52.585895 waagent[1833]: 2025-05-27T03:21:52.585764Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 27 03:21:52.590165 waagent[1833]: 2025-05-27T03:21:52.590132Z INFO Daemon Daemon Found device: None May 27 03:21:52.591671 waagent[1833]: 2025-05-27T03:21:52.591411Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 27 03:21:52.593414 waagent[1833]: 2025-05-27T03:21:52.593390Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 27 03:21:52.598421 waagent[1833]: 2025-05-27T03:21:52.598388Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 27 03:21:52.601701 waagent[1833]: 2025-05-27T03:21:52.601574Z INFO Daemon Daemon Running default provisioning handler May 27 03:21:52.612690 waagent[1833]: 2025-05-27T03:21:52.611583Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
May 27 03:21:52.615725 waagent[1833]: 2025-05-27T03:21:52.615692Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 27 03:21:52.619159 waagent[1833]: 2025-05-27T03:21:52.618853Z INFO Daemon Daemon cloud-init is enabled: False May 27 03:21:52.620677 waagent[1833]: 2025-05-27T03:21:52.620630Z INFO Daemon Daemon Copying ovf-env.xml May 27 03:21:52.661471 waagent[1833]: 2025-05-27T03:21:52.661426Z INFO Daemon Daemon Successfully mounted dvd May 27 03:21:52.678126 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 27 03:21:52.680426 waagent[1833]: 2025-05-27T03:21:52.680392Z INFO Daemon Daemon Detect protocol endpoint May 27 03:21:52.681770 waagent[1833]: 2025-05-27T03:21:52.681558Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 27 03:21:52.684995 waagent[1833]: 2025-05-27T03:21:52.684949Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 27 03:21:52.688066 waagent[1833]: 2025-05-27T03:21:52.687932Z INFO Daemon Daemon Test for route to 168.63.129.16 May 27 03:21:52.689697 waagent[1833]: 2025-05-27T03:21:52.689655Z INFO Daemon Daemon Route to 168.63.129.16 exists May 27 03:21:52.690929 waagent[1833]: 2025-05-27T03:21:52.690859Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 27 03:21:52.704026 waagent[1833]: 2025-05-27T03:21:52.703646Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 27 03:21:52.706294 waagent[1833]: 2025-05-27T03:21:52.705901Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 27 03:21:52.707450 waagent[1833]: 2025-05-27T03:21:52.707420Z INFO Daemon Daemon Server preferred version:2015-04-05 May 27 03:21:52.751421 waagent[1833]: 2025-05-27T03:21:52.751383Z INFO Daemon Daemon Initializing goal state during protocol detection May 27 03:21:52.752911 waagent[1833]: 2025-05-27T03:21:52.752868Z INFO Daemon Daemon Forcing an update of the goal state. 
May 27 03:21:52.760179 waagent[1833]: 2025-05-27T03:21:52.760151Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 27 03:21:52.772757 waagent[1833]: 2025-05-27T03:21:52.772727Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 27 03:21:52.776574 waagent[1833]: 2025-05-27T03:21:52.776332Z INFO Daemon May 27 03:21:52.777621 waagent[1833]: 2025-05-27T03:21:52.777254Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d53a47a3-7ef4-4db2-a98e-a76e75bb3a4b eTag: 9624231321512262265 source: Fabric] May 27 03:21:52.780613 waagent[1833]: 2025-05-27T03:21:52.780574Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 27 03:21:52.782849 waagent[1833]: 2025-05-27T03:21:52.782518Z INFO Daemon May 27 03:21:52.784957 waagent[1833]: 2025-05-27T03:21:52.784912Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 27 03:21:52.792106 waagent[1833]: 2025-05-27T03:21:52.790543Z INFO Daemon Daemon Downloading artifacts profile blob May 27 03:21:52.810722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:21:52.813213 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:21:52.817164 systemd[1]: Startup finished in 2.855s (kernel) + 8.863s (initrd) + 5.264s (userspace) = 16.983s. 
May 27 03:21:52.825236 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:21:52.869418 waagent[1833]: 2025-05-27T03:21:52.869385Z INFO Daemon Downloaded certificate {'thumbprint': '70052F7A2CFDF10023730DF259CE4E5474B1D050', 'hasPrivateKey': True} May 27 03:21:52.871609 waagent[1833]: 2025-05-27T03:21:52.871577Z INFO Daemon Fetch goal state completed May 27 03:21:52.879209 waagent[1833]: 2025-05-27T03:21:52.878472Z INFO Daemon Daemon Starting provisioning May 27 03:21:52.879209 waagent[1833]: 2025-05-27T03:21:52.878602Z INFO Daemon Daemon Handle ovf-env.xml. May 27 03:21:52.879209 waagent[1833]: 2025-05-27T03:21:52.878761Z INFO Daemon Daemon Set hostname [ci-4344.0.0-a-98ca04e8ee] May 27 03:21:52.884134 waagent[1833]: 2025-05-27T03:21:52.883326Z INFO Daemon Daemon Publish hostname [ci-4344.0.0-a-98ca04e8ee] May 27 03:21:52.884134 waagent[1833]: 2025-05-27T03:21:52.883570Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 27 03:21:52.884134 waagent[1833]: 2025-05-27T03:21:52.883814Z INFO Daemon Daemon Primary interface is [eth0] May 27 03:21:52.890567 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:21:52.891124 systemd-networkd[1599]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 27 03:21:52.891244 systemd-networkd[1599]: eth0: DHCP lease lost May 27 03:21:52.891732 waagent[1833]: 2025-05-27T03:21:52.891697Z INFO Daemon Daemon Create user account if not exists May 27 03:21:52.892150 waagent[1833]: 2025-05-27T03:21:52.892125Z INFO Daemon Daemon User core already exists, skip useradd May 27 03:21:52.892435 waagent[1833]: 2025-05-27T03:21:52.892419Z INFO Daemon Daemon Configure sudoer May 27 03:21:52.895863 waagent[1833]: 2025-05-27T03:21:52.895816Z INFO Daemon Daemon Configure sshd May 27 03:21:52.900003 waagent[1833]: 2025-05-27T03:21:52.899830Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 27 03:21:52.905194 waagent[1833]: 2025-05-27T03:21:52.900171Z INFO Daemon Daemon Deploy ssh public key. May 27 03:21:52.909385 login[1814]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 27 03:21:52.910726 login[1815]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 27 03:21:52.911102 systemd-networkd[1599]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 27 03:21:52.917288 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 03:21:52.920292 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:21:52.929285 systemd-logind[1703]: New session 2 of user core. May 27 03:21:52.934153 systemd-logind[1703]: New session 1 of user core. May 27 03:21:52.939783 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:21:52.942191 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 03:21:52.954061 (systemd)[1868]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:21:52.955950 systemd-logind[1703]: New session c1 of user core. 
May 27 03:21:53.106869 systemd[1868]: Queued start job for default target default.target. May 27 03:21:53.112646 systemd[1868]: Created slice app.slice - User Application Slice. May 27 03:21:53.112917 systemd[1868]: Reached target paths.target - Paths. May 27 03:21:53.112996 systemd[1868]: Reached target timers.target - Timers. May 27 03:21:53.114216 systemd[1868]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:21:53.123442 systemd[1868]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:21:53.123489 systemd[1868]: Reached target sockets.target - Sockets. May 27 03:21:53.123559 systemd[1868]: Reached target basic.target - Basic System. May 27 03:21:53.123600 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:21:53.123720 systemd[1868]: Reached target default.target - Main User Target. May 27 03:21:53.123743 systemd[1868]: Startup finished in 162ms. May 27 03:21:53.125151 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 03:21:53.126112 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 03:21:53.375980 kubelet[1850]: E0527 03:21:53.375866 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:21:53.377595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:21:53.377730 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:21:53.378066 systemd[1]: kubelet.service: Consumed 868ms CPU time, 268M memory peak. 
May 27 03:21:53.963894 waagent[1833]: 2025-05-27T03:21:53.963827Z INFO Daemon Daemon Provisioning complete May 27 03:21:53.976294 waagent[1833]: 2025-05-27T03:21:53.976261Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 27 03:21:53.976585 waagent[1833]: 2025-05-27T03:21:53.976557Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 27 03:21:53.979512 waagent[1833]: 2025-05-27T03:21:53.979447Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 27 03:21:54.072925 waagent[1902]: 2025-05-27T03:21:54.072856Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 27 03:21:54.073245 waagent[1902]: 2025-05-27T03:21:54.072953Z INFO ExtHandler ExtHandler OS: flatcar 4344.0.0 May 27 03:21:54.073245 waagent[1902]: 2025-05-27T03:21:54.072989Z INFO ExtHandler ExtHandler Python: 3.11.12 May 27 03:21:54.073245 waagent[1902]: 2025-05-27T03:21:54.073022Z INFO ExtHandler ExtHandler CPU Arch: x86_64 May 27 03:21:54.082450 waagent[1902]: 2025-05-27T03:21:54.082406Z INFO ExtHandler ExtHandler Distro: flatcar-4344.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; May 27 03:21:54.082565 waagent[1902]: 2025-05-27T03:21:54.082543Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 03:21:54.082607 waagent[1902]: 2025-05-27T03:21:54.082591Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 03:21:54.095002 waagent[1902]: 2025-05-27T03:21:54.094962Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 27 03:21:54.102682 waagent[1902]: 2025-05-27T03:21:54.102651Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 27 03:21:54.102987 waagent[1902]: 2025-05-27T03:21:54.102961Z INFO ExtHandler May 27 03:21:54.103030 waagent[1902]: 
2025-05-27T03:21:54.103010Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d6a22f9a-1167-4259-8cf1-26ed0a6414a5 eTag: 9624231321512262265 source: Fabric] May 27 03:21:54.103190 waagent[1902]: 2025-05-27T03:21:54.103172Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 27 03:21:54.103493 waagent[1902]: 2025-05-27T03:21:54.103472Z INFO ExtHandler May 27 03:21:54.103528 waagent[1902]: 2025-05-27T03:21:54.103508Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 27 03:21:54.107734 waagent[1902]: 2025-05-27T03:21:54.107709Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 27 03:21:54.176464 waagent[1902]: 2025-05-27T03:21:54.176422Z INFO ExtHandler Downloaded certificate {'thumbprint': '70052F7A2CFDF10023730DF259CE4E5474B1D050', 'hasPrivateKey': True} May 27 03:21:54.176753 waagent[1902]: 2025-05-27T03:21:54.176729Z INFO ExtHandler Fetch goal state completed May 27 03:21:54.192780 waagent[1902]: 2025-05-27T03:21:54.192741Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 27 03:21:54.196524 waagent[1902]: 2025-05-27T03:21:54.196480Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1902 May 27 03:21:54.196623 waagent[1902]: 2025-05-27T03:21:54.196588Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 27 03:21:54.196825 waagent[1902]: 2025-05-27T03:21:54.196807Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 27 03:21:54.197705 waagent[1902]: 2025-05-27T03:21:54.197675Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk'] May 27 03:21:54.197984 waagent[1902]: 2025-05-27T03:21:54.197963Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', 
'4344.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 27 03:21:54.198087 waagent[1902]: 2025-05-27T03:21:54.198072Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 27 03:21:54.198430 waagent[1902]: 2025-05-27T03:21:54.198412Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 27 03:21:54.204643 waagent[1902]: 2025-05-27T03:21:54.204623Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 27 03:21:54.204758 waagent[1902]: 2025-05-27T03:21:54.204741Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 27 03:21:54.210078 waagent[1902]: 2025-05-27T03:21:54.209930Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 27 03:21:54.214833 systemd[1]: Reload requested from client PID 1917 ('systemctl') (unit waagent.service)... May 27 03:21:54.214845 systemd[1]: Reloading... May 27 03:21:54.293915 zram_generator::config[1954]: No configuration found. May 27 03:21:54.360698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:21:54.443425 systemd[1]: Reloading finished in 228 ms. May 27 03:21:54.466301 waagent[1902]: 2025-05-27T03:21:54.464907Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 27 03:21:54.466301 waagent[1902]: 2025-05-27T03:21:54.465018Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 27 03:21:54.552777 waagent[1902]: 2025-05-27T03:21:54.552735Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
May 27 03:21:54.553020 waagent[1902]: 2025-05-27T03:21:54.552997Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 27 03:21:54.553615 waagent[1902]: 2025-05-27T03:21:54.553588Z INFO ExtHandler ExtHandler Starting env monitor service. May 27 03:21:54.553848 waagent[1902]: 2025-05-27T03:21:54.553827Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 03:21:54.553943 waagent[1902]: 2025-05-27T03:21:54.553895Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 03:21:54.554102 waagent[1902]: 2025-05-27T03:21:54.554082Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 27 03:21:54.554264 waagent[1902]: 2025-05-27T03:21:54.554236Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 27 03:21:54.554454 waagent[1902]: 2025-05-27T03:21:54.554423Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 27 03:21:54.554590 waagent[1902]: 2025-05-27T03:21:54.554569Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
May 27 03:21:54.554836 waagent[1902]: 2025-05-27T03:21:54.554817Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 27 03:21:54.554836 waagent[1902]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 27 03:21:54.554836 waagent[1902]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 May 27 03:21:54.554836 waagent[1902]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 27 03:21:54.554836 waagent[1902]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 27 03:21:54.554836 waagent[1902]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 27 03:21:54.554836 waagent[1902]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 27 03:21:54.555011 waagent[1902]: 2025-05-27T03:21:54.554868Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 03:21:54.555126 waagent[1902]: 2025-05-27T03:21:54.555105Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 27 03:21:54.555315 waagent[1902]: 2025-05-27T03:21:54.555296Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 27 03:21:54.555496 waagent[1902]: 2025-05-27T03:21:54.555477Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 03:21:54.556021 waagent[1902]: 2025-05-27T03:21:54.555999Z INFO EnvHandler ExtHandler Configure routes May 27 03:21:54.556324 waagent[1902]: 2025-05-27T03:21:54.556249Z INFO EnvHandler ExtHandler Gateway:None May 27 03:21:54.556324 waagent[1902]: 2025-05-27T03:21:54.556292Z INFO EnvHandler ExtHandler Routes:None May 27 03:21:54.556530 waagent[1902]: 2025-05-27T03:21:54.556511Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 27 03:21:54.562897 waagent[1902]: 2025-05-27T03:21:54.562682Z INFO ExtHandler ExtHandler May 27 03:21:54.562897 waagent[1902]: 2025-05-27T03:21:54.562737Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8b61203a-7d97-41e7-b424-3d54221ab3db correlation ddb81a64-2e84-4588-b9b0-1e7b8c587ae1 created: 2025-05-27T03:21:20.557655Z] May 27 03:21:54.563086 waagent[1902]: 2025-05-27T03:21:54.563057Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
May 27 03:21:54.563555 waagent[1902]: 2025-05-27T03:21:54.563524Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] May 27 03:21:54.570423 waagent[1902]: 2025-05-27T03:21:54.570389Z INFO MonitorHandler ExtHandler Network interfaces: May 27 03:21:54.570423 waagent[1902]: Executing ['ip', '-a', '-o', 'link']: May 27 03:21:54.570423 waagent[1902]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 27 03:21:54.570423 waagent[1902]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:db:d5:77 brd ff:ff:ff:ff:ff:ff\ alias Network Device May 27 03:21:54.570423 waagent[1902]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:db:d5:77 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 May 27 03:21:54.570423 waagent[1902]: Executing ['ip', '-4', '-a', '-o', 'address']: May 27 03:21:54.570423 waagent[1902]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 27 03:21:54.570423 waagent[1902]: 2: eth0 inet 10.200.8.16/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever May 27 03:21:54.570423 waagent[1902]: Executing ['ip', '-6', '-a', '-o', 'address']: May 27 03:21:54.570423 waagent[1902]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 27 03:21:54.570423 waagent[1902]: 2: eth0 inet6 fe80::20d:3aff:fedb:d577/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 27 03:21:54.570423 waagent[1902]: 3: enP30832s1 inet6 fe80::20d:3aff:fedb:d577/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 27 03:21:54.588183 waagent[1902]: 2025-05-27T03:21:54.588146Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): 
Illegal option `--numeric' with this command May 27 03:21:54.588183 waagent[1902]: Try `iptables -h' or 'iptables --help' for more information.) May 27 03:21:54.588427 waagent[1902]: 2025-05-27T03:21:54.588405Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C15F2632-7432-4B17-99F2-D5B66A26D5CB;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] May 27 03:21:54.604516 waagent[1902]: 2025-05-27T03:21:54.604477Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: May 27 03:21:54.604516 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 27 03:21:54.604516 waagent[1902]: pkts bytes target prot opt in out source destination May 27 03:21:54.604516 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 27 03:21:54.604516 waagent[1902]: pkts bytes target prot opt in out source destination May 27 03:21:54.604516 waagent[1902]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) May 27 03:21:54.604516 waagent[1902]: pkts bytes target prot opt in out source destination May 27 03:21:54.604516 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 27 03:21:54.604516 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 27 03:21:54.604516 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 27 03:21:54.606855 waagent[1902]: 2025-05-27T03:21:54.606805Z INFO EnvHandler ExtHandler Current Firewall rules: May 27 03:21:54.606855 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 27 03:21:54.606855 waagent[1902]: pkts bytes target prot opt in out source destination May 27 03:21:54.606855 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 27 03:21:54.606855 waagent[1902]: pkts bytes target prot opt in out source destination May 27 03:21:54.606855 waagent[1902]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) May 
27 03:21:54.606855 waagent[1902]: pkts bytes target prot opt in out source destination May 27 03:21:54.606855 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 27 03:21:54.606855 waagent[1902]: 3 535 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 27 03:21:54.606855 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 27 03:22:03.628835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 03:22:03.631081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:22:04.023168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:22:04.032118 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:22:04.069208 kubelet[2052]: E0527 03:22:04.069175 2052 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:22:04.071923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:22:04.072050 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:22:04.072396 systemd[1]: kubelet.service: Consumed 139ms CPU time, 111M memory peak. May 27 03:22:10.479270 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:22:10.480565 systemd[1]: Started sshd@0-10.200.8.16:22-10.200.16.10:55286.service - OpenSSH per-connection server daemon (10.200.16.10:55286). 
May 27 03:22:11.130599 sshd[2060]: Accepted publickey for core from 10.200.16.10 port 55286 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU May 27 03:22:11.132095 sshd-session[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:11.136917 systemd-logind[1703]: New session 3 of user core. May 27 03:22:11.147043 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 03:22:11.701726 systemd[1]: Started sshd@1-10.200.8.16:22-10.200.16.10:39536.service - OpenSSH per-connection server daemon (10.200.16.10:39536). May 27 03:22:12.340844 sshd[2065]: Accepted publickey for core from 10.200.16.10 port 39536 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU May 27 03:22:12.342335 sshd-session[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:12.347102 systemd-logind[1703]: New session 4 of user core. May 27 03:22:12.357040 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 03:22:12.801087 sshd[2067]: Connection closed by 10.200.16.10 port 39536 May 27 03:22:12.801994 sshd-session[2065]: pam_unix(sshd:session): session closed for user core May 27 03:22:12.806079 systemd[1]: sshd@1-10.200.8.16:22-10.200.16.10:39536.service: Deactivated successfully. May 27 03:22:12.807612 systemd[1]: session-4.scope: Deactivated successfully. May 27 03:22:12.808498 systemd-logind[1703]: Session 4 logged out. Waiting for processes to exit. May 27 03:22:12.809709 systemd-logind[1703]: Removed session 4. May 27 03:22:12.917152 systemd[1]: Started sshd@2-10.200.8.16:22-10.200.16.10:39542.service - OpenSSH per-connection server daemon (10.200.16.10:39542). 
May 27 03:22:13.556727 sshd[2073]: Accepted publickey for core from 10.200.16.10 port 39542 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU May 27 03:22:13.558362 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:13.563277 systemd-logind[1703]: New session 5 of user core. May 27 03:22:13.568038 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 03:22:14.017585 sshd[2075]: Connection closed by 10.200.16.10 port 39542 May 27 03:22:14.018581 sshd-session[2073]: pam_unix(sshd:session): session closed for user core May 27 03:22:14.022581 systemd[1]: sshd@2-10.200.8.16:22-10.200.16.10:39542.service: Deactivated successfully. May 27 03:22:14.024185 systemd[1]: session-5.scope: Deactivated successfully. May 27 03:22:14.024814 systemd-logind[1703]: Session 5 logged out. Waiting for processes to exit. May 27 03:22:14.026124 systemd-logind[1703]: Removed session 5. May 27 03:22:14.093390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 03:22:14.095089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:22:14.126788 systemd[1]: Started sshd@3-10.200.8.16:22-10.200.16.10:39550.service - OpenSSH per-connection server daemon (10.200.16.10:39550). May 27 03:22:14.521306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:22:14.529123 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:22:14.551647 chronyd[1729]: Selected source PHC0 May 27 03:22:14.564969 kubelet[2091]: E0527 03:22:14.564939 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:22:14.566818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:22:14.566968 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:22:14.567367 systemd[1]: kubelet.service: Consumed 136ms CPU time, 108.8M memory peak. May 27 03:22:14.765379 sshd[2084]: Accepted publickey for core from 10.200.16.10 port 39550 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU May 27 03:22:14.766786 sshd-session[2084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:14.771714 systemd-logind[1703]: New session 6 of user core. May 27 03:22:14.780035 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 03:22:15.224902 sshd[2099]: Connection closed by 10.200.16.10 port 39550 May 27 03:22:15.225561 sshd-session[2084]: pam_unix(sshd:session): session closed for user core May 27 03:22:15.229404 systemd[1]: sshd@3-10.200.8.16:22-10.200.16.10:39550.service: Deactivated successfully. May 27 03:22:15.231005 systemd[1]: session-6.scope: Deactivated successfully. May 27 03:22:15.231652 systemd-logind[1703]: Session 6 logged out. Waiting for processes to exit. May 27 03:22:15.232830 systemd-logind[1703]: Removed session 6. 
May 27 03:22:15.342312 systemd[1]: Started sshd@4-10.200.8.16:22-10.200.16.10:39564.service - OpenSSH per-connection server daemon (10.200.16.10:39564). May 27 03:22:15.986916 sshd[2105]: Accepted publickey for core from 10.200.16.10 port 39564 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU May 27 03:22:15.988303 sshd-session[2105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:15.992979 systemd-logind[1703]: New session 7 of user core. May 27 03:22:16.001027 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 03:22:16.358903 sudo[2108]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 03:22:16.359113 sudo[2108]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:22:16.376248 sudo[2108]: pam_unix(sudo:session): session closed for user root May 27 03:22:16.492634 sshd[2107]: Connection closed by 10.200.16.10 port 39564 May 27 03:22:16.493377 sshd-session[2105]: pam_unix(sshd:session): session closed for user core May 27 03:22:16.497016 systemd[1]: sshd@4-10.200.8.16:22-10.200.16.10:39564.service: Deactivated successfully. May 27 03:22:16.498755 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:22:16.500023 systemd-logind[1703]: Session 7 logged out. Waiting for processes to exit. May 27 03:22:16.501069 systemd-logind[1703]: Removed session 7. May 27 03:22:16.609145 systemd[1]: Started sshd@5-10.200.8.16:22-10.200.16.10:39574.service - OpenSSH per-connection server daemon (10.200.16.10:39574). May 27 03:22:17.256985 sshd[2114]: Accepted publickey for core from 10.200.16.10 port 39574 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU May 27 03:22:17.258420 sshd-session[2114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:17.263118 systemd-logind[1703]: New session 8 of user core. May 27 03:22:17.268048 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 03:22:17.605692 sudo[2118]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 03:22:17.605919 sudo[2118]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:22:17.611448 sudo[2118]: pam_unix(sudo:session): session closed for user root May 27 03:22:17.615171 sudo[2117]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 03:22:17.615362 sudo[2117]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:22:17.622651 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:22:17.651209 augenrules[2140]: No rules May 27 03:22:17.652261 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:22:17.652462 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:22:17.653288 sudo[2117]: pam_unix(sudo:session): session closed for user root May 27 03:22:17.754854 sshd[2116]: Connection closed by 10.200.16.10 port 39574 May 27 03:22:17.755330 sshd-session[2114]: pam_unix(sshd:session): session closed for user core May 27 03:22:17.758708 systemd[1]: sshd@5-10.200.8.16:22-10.200.16.10:39574.service: Deactivated successfully. May 27 03:22:17.760079 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:22:17.760711 systemd-logind[1703]: Session 8 logged out. Waiting for processes to exit. May 27 03:22:17.761725 systemd-logind[1703]: Removed session 8. May 27 03:22:17.871039 systemd[1]: Started sshd@6-10.200.8.16:22-10.200.16.10:39584.service - OpenSSH per-connection server daemon (10.200.16.10:39584). 
May 27 03:22:18.513496 sshd[2149]: Accepted publickey for core from 10.200.16.10 port 39584 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU May 27 03:22:18.514785 sshd-session[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:22:18.519239 systemd-logind[1703]: New session 9 of user core. May 27 03:22:18.525018 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:22:18.860797 sudo[2152]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 03:22:18.861026 sudo[2152]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:22:20.153997 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 03:22:20.164185 (dockerd)[2170]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 03:22:20.388585 dockerd[2170]: time="2025-05-27T03:22:20.388534382Z" level=info msg="Starting up" May 27 03:22:20.389792 dockerd[2170]: time="2025-05-27T03:22:20.389762104Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 03:22:20.523379 dockerd[2170]: time="2025-05-27T03:22:20.523195296Z" level=info msg="Loading containers: start." May 27 03:22:20.539006 kernel: Initializing XFRM netlink socket May 27 03:22:20.729468 systemd-networkd[1599]: docker0: Link UP May 27 03:22:20.741192 dockerd[2170]: time="2025-05-27T03:22:20.741163238Z" level=info msg="Loading containers: done." 
May 27 03:22:20.759724 dockerd[2170]: time="2025-05-27T03:22:20.759696564Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 03:22:20.759831 dockerd[2170]: time="2025-05-27T03:22:20.759764615Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 03:22:20.759856 dockerd[2170]: time="2025-05-27T03:22:20.759840401Z" level=info msg="Initializing buildkit" May 27 03:22:20.801437 dockerd[2170]: time="2025-05-27T03:22:20.801185926Z" level=info msg="Completed buildkit initialization" May 27 03:22:20.806682 dockerd[2170]: time="2025-05-27T03:22:20.806646717Z" level=info msg="Daemon has completed initialization" May 27 03:22:20.806753 dockerd[2170]: time="2025-05-27T03:22:20.806696403Z" level=info msg="API listen on /run/docker.sock" May 27 03:22:20.806952 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 03:22:21.558349 containerd[1720]: time="2025-05-27T03:22:21.558311216Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 03:22:22.141951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620978069.mount: Deactivated successfully. 
May 27 03:22:23.350168 containerd[1720]: time="2025-05-27T03:22:23.350107320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:23.352273 containerd[1720]: time="2025-05-27T03:22:23.352237980Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075411" May 27 03:22:23.355145 containerd[1720]: time="2025-05-27T03:22:23.355101813Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:23.358583 containerd[1720]: time="2025-05-27T03:22:23.358528582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:23.359324 containerd[1720]: time="2025-05-27T03:22:23.359139009Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.800785811s" May 27 03:22:23.359324 containerd[1720]: time="2025-05-27T03:22:23.359174109Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 27 03:22:23.359966 containerd[1720]: time="2025-05-27T03:22:23.359943147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 03:22:24.593361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
May 27 03:22:24.596368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:22:25.106955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:22:25.119081 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:22:25.132643 containerd[1720]: time="2025-05-27T03:22:25.132580187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:25.135614 containerd[1720]: time="2025-05-27T03:22:25.135583790Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011398" May 27 03:22:25.138172 containerd[1720]: time="2025-05-27T03:22:25.138129212Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:25.147652 containerd[1720]: time="2025-05-27T03:22:25.147534337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:25.150067 containerd[1720]: time="2025-05-27T03:22:25.149050317Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.789079713s" May 27 03:22:25.150067 containerd[1720]: time="2025-05-27T03:22:25.149085499Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image 
reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 27 03:22:25.150067 containerd[1720]: time="2025-05-27T03:22:25.149742926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 03:22:25.158625 kubelet[2435]: E0527 03:22:25.158595 2435 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:22:25.160356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:22:25.160481 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:22:25.160889 systemd[1]: kubelet.service: Consumed 145ms CPU time, 110.9M memory peak. May 27 03:22:26.390271 containerd[1720]: time="2025-05-27T03:22:26.390215903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:26.392446 containerd[1720]: time="2025-05-27T03:22:26.392411470Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148968" May 27 03:22:26.394901 containerd[1720]: time="2025-05-27T03:22:26.394849512Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:26.399362 containerd[1720]: time="2025-05-27T03:22:26.399312168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:26.400092 containerd[1720]: time="2025-05-27T03:22:26.399927565Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.250160776s" May 27 03:22:26.400092 containerd[1720]: time="2025-05-27T03:22:26.399959144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 27 03:22:26.400669 containerd[1720]: time="2025-05-27T03:22:26.400647314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 03:22:27.230368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278904064.mount: Deactivated successfully. May 27 03:22:27.549692 containerd[1720]: time="2025-05-27T03:22:27.549590108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:27.551858 containerd[1720]: time="2025-05-27T03:22:27.551823740Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889083" May 27 03:22:27.554445 containerd[1720]: time="2025-05-27T03:22:27.554407387Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:27.557279 containerd[1720]: time="2025-05-27T03:22:27.557241766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:27.557664 containerd[1720]: time="2025-05-27T03:22:27.557517307Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id 
\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.156836739s" May 27 03:22:27.557664 containerd[1720]: time="2025-05-27T03:22:27.557549823Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 03:22:27.558139 containerd[1720]: time="2025-05-27T03:22:27.558113947Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 03:22:28.103912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3716144252.mount: Deactivated successfully. May 27 03:22:28.840406 containerd[1720]: time="2025-05-27T03:22:28.840359664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:28.842389 containerd[1720]: time="2025-05-27T03:22:28.842356056Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" May 27 03:22:28.844779 containerd[1720]: time="2025-05-27T03:22:28.844742846Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:28.848057 containerd[1720]: time="2025-05-27T03:22:28.848020001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:28.848758 containerd[1720]: time="2025-05-27T03:22:28.848573357Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.290432148s" May 27 03:22:28.848758 containerd[1720]: time="2025-05-27T03:22:28.848606208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 27 03:22:28.849328 containerd[1720]: time="2025-05-27T03:22:28.849306042Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:22:29.334126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174302046.mount: Deactivated successfully. May 27 03:22:29.349157 containerd[1720]: time="2025-05-27T03:22:29.349122060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:22:29.351169 containerd[1720]: time="2025-05-27T03:22:29.351141056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 27 03:22:29.354273 containerd[1720]: time="2025-05-27T03:22:29.354237842Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:22:29.360923 containerd[1720]: time="2025-05-27T03:22:29.359419510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:22:29.360923 containerd[1720]: time="2025-05-27T03:22:29.360135779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 510.80574ms" May 27 03:22:29.360923 containerd[1720]: time="2025-05-27T03:22:29.360163340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:22:29.361517 containerd[1720]: time="2025-05-27T03:22:29.361480806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 03:22:31.156519 containerd[1720]: time="2025-05-27T03:22:31.156462033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:31.158722 containerd[1720]: time="2025-05-27T03:22:31.158685839Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142747" May 27 03:22:31.161351 containerd[1720]: time="2025-05-27T03:22:31.161312841Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:31.164925 containerd[1720]: time="2025-05-27T03:22:31.164858560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:31.165691 containerd[1720]: time="2025-05-27T03:22:31.165489746Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.803918821s" May 27 
03:22:31.165691 containerd[1720]: time="2025-05-27T03:22:31.165521610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 03:22:33.150148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:22:33.150664 systemd[1]: kubelet.service: Consumed 145ms CPU time, 110.9M memory peak. May 27 03:22:33.152728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:22:33.176365 systemd[1]: Reload requested from client PID 2549 ('systemctl') (unit session-9.scope)... May 27 03:22:33.176382 systemd[1]: Reloading... May 27 03:22:33.267904 zram_generator::config[2590]: No configuration found. May 27 03:22:33.346113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:22:33.435783 systemd[1]: Reloading finished in 259 ms. May 27 03:22:33.537715 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:22:33.537792 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:22:33.538065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:22:33.538115 systemd[1]: kubelet.service: Consumed 83ms CPU time, 83.4M memory peak. May 27 03:22:33.539513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:22:34.007756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:22:34.010947 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:22:34.046967 kubelet[2661]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:22:34.047204 kubelet[2661]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:22:34.047204 kubelet[2661]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:22:34.047302 kubelet[2661]: I0527 03:22:34.047273 2661 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:22:34.200308 kubelet[2661]: I0527 03:22:34.200285 2661 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 03:22:34.200308 kubelet[2661]: I0527 03:22:34.200303 2661 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:22:34.200547 kubelet[2661]: I0527 03:22:34.200535 2661 server.go:956] "Client rotation is on, will bootstrap in background" May 27 03:22:34.231972 kubelet[2661]: I0527 03:22:34.231949 2661 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:22:34.234903 kubelet[2661]: E0527 03:22:34.233738 2661 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 03:22:34.241469 kubelet[2661]: I0527 03:22:34.241449 2661 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:22:34.244968 kubelet[2661]: I0527 03:22:34.244952 2661 
server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:22:34.245214 kubelet[2661]: I0527 03:22:34.245186 2661 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:22:34.245360 kubelet[2661]: I0527 03:22:34.245212 2661 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-98ca04e8ee","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:22:34.245486 kubelet[2661]: I0527 03:22:34.245363 2661 
topology_manager.go:138] "Creating topology manager with none policy" May 27 03:22:34.245486 kubelet[2661]: I0527 03:22:34.245373 2661 container_manager_linux.go:303] "Creating device plugin manager" May 27 03:22:34.246218 kubelet[2661]: I0527 03:22:34.246204 2661 state_mem.go:36] "Initialized new in-memory state store" May 27 03:22:34.250177 kubelet[2661]: I0527 03:22:34.250060 2661 kubelet.go:480] "Attempting to sync node with API server" May 27 03:22:34.250177 kubelet[2661]: I0527 03:22:34.250091 2661 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:22:34.250177 kubelet[2661]: I0527 03:22:34.250118 2661 kubelet.go:386] "Adding apiserver pod source" May 27 03:22:34.251676 kubelet[2661]: I0527 03:22:34.251556 2661 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:22:34.255827 kubelet[2661]: E0527 03:22:34.255800 2661 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-a-98ca04e8ee&limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 03:22:34.256173 kubelet[2661]: E0527 03:22:34.256145 2661 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 03:22:34.256495 kubelet[2661]: I0527 03:22:34.256480 2661 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:22:34.256937 kubelet[2661]: I0527 03:22:34.256924 2661 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or 
the ClusterTrustBundleProjection featuregate is disabled" May 27 03:22:34.257623 kubelet[2661]: W0527 03:22:34.257601 2661 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 03:22:34.260163 kubelet[2661]: I0527 03:22:34.259920 2661 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:22:34.260163 kubelet[2661]: I0527 03:22:34.259966 2661 server.go:1289] "Started kubelet" May 27 03:22:34.263189 kubelet[2661]: I0527 03:22:34.263153 2661 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:22:34.264073 kubelet[2661]: I0527 03:22:34.263988 2661 server.go:317] "Adding debug handlers to kubelet server" May 27 03:22:34.265050 kubelet[2661]: I0527 03:22:34.264998 2661 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:22:34.265395 kubelet[2661]: I0527 03:22:34.265379 2661 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:22:34.266652 kubelet[2661]: E0527 03:22:34.265541 2661 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.0.0-a-98ca04e8ee.1843444781bd9777 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-a-98ca04e8ee,UID:ci-4344.0.0-a-98ca04e8ee,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-a-98ca04e8ee,},FirstTimestamp:2025-05-27 03:22:34.259937143 +0000 UTC m=+0.245370969,LastTimestamp:2025-05-27 03:22:34.259937143 +0000 UTC m=+0.245370969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-a-98ca04e8ee,}" May 27 03:22:34.267852 kubelet[2661]: I0527 03:22:34.267838 2661 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:22:34.267996 kubelet[2661]: I0527 03:22:34.267987 2661 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:22:34.268076 kubelet[2661]: I0527 03:22:34.268062 2661 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:22:34.270109 kubelet[2661]: I0527 03:22:34.270089 2661 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:22:34.270168 kubelet[2661]: I0527 03:22:34.270146 2661 reconciler.go:26] "Reconciler: start to sync state" May 27 03:22:34.270901 kubelet[2661]: E0527 03:22:34.270651 2661 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 03:22:34.271949 kubelet[2661]: E0527 03:22:34.271931 2661 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:22:34.272365 kubelet[2661]: E0527 03:22:34.272347 2661 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found" May 27 03:22:34.272747 kubelet[2661]: I0527 03:22:34.272729 2661 factory.go:223] Registration of the systemd container factory successfully May 27 03:22:34.272821 kubelet[2661]: I0527 03:22:34.272808 2661 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:22:34.273187 kubelet[2661]: E0527 03:22:34.273165 2661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-98ca04e8ee?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="200ms" May 27 03:22:34.274152 kubelet[2661]: I0527 03:22:34.273962 2661 factory.go:223] Registration of the containerd container factory successfully May 27 03:22:34.302509 kubelet[2661]: I0527 03:22:34.302495 2661 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:22:34.302509 kubelet[2661]: I0527 03:22:34.302506 2661 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:22:34.302594 kubelet[2661]: I0527 03:22:34.302520 2661 state_mem.go:36] "Initialized new in-memory state store" May 27 03:22:34.307339 kubelet[2661]: I0527 03:22:34.307327 2661 policy_none.go:49] "None policy: Start" May 27 03:22:34.307388 kubelet[2661]: I0527 03:22:34.307343 2661 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:22:34.307388 kubelet[2661]: I0527 03:22:34.307353 2661 state_mem.go:35] "Initializing new in-memory state store" May 27 03:22:34.314268 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 27 03:22:34.327759 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:22:34.331810 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:22:34.332658 kubelet[2661]: I0527 03:22:34.332570 2661 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 03:22:34.333636 kubelet[2661]: I0527 03:22:34.333623 2661 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 03:22:34.333887 kubelet[2661]: I0527 03:22:34.333691 2661 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 03:22:34.333887 kubelet[2661]: I0527 03:22:34.333707 2661 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:22:34.333887 kubelet[2661]: I0527 03:22:34.333713 2661 kubelet.go:2436] "Starting kubelet main sync loop" May 27 03:22:34.333887 kubelet[2661]: E0527 03:22:34.333738 2661 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:22:34.339838 kubelet[2661]: E0527 03:22:34.339820 2661 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 03:22:34.344385 kubelet[2661]: E0527 03:22:34.344370 2661 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 03:22:34.344526 kubelet[2661]: I0527 03:22:34.344514 2661 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:22:34.344556 kubelet[2661]: I0527 03:22:34.344527 2661 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:22:34.344968 kubelet[2661]: I0527 03:22:34.344949 2661 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:22:34.346121 kubelet[2661]: E0527 03:22:34.346104 2661 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:22:34.346300 kubelet[2661]: E0527 03:22:34.346139 2661 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.0.0-a-98ca04e8ee\" not found" May 27 03:22:34.445116 systemd[1]: Created slice kubepods-burstable-pod13e19515f9fe0fce5b75343ac3937a76.slice - libcontainer container kubepods-burstable-pod13e19515f9fe0fce5b75343ac3937a76.slice. May 27 03:22:34.446699 kubelet[2661]: I0527 03:22:34.446558 2661 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.446977 kubelet[2661]: E0527 03:22:34.446960 2661 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.451623 kubelet[2661]: E0527 03:22:34.451607 2661 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.454851 systemd[1]: Created slice kubepods-burstable-pod860ba2702ccdc5931761863d47e78798.slice - libcontainer container kubepods-burstable-pod860ba2702ccdc5931761863d47e78798.slice. 
May 27 03:22:34.461748 kubelet[2661]: E0527 03:22:34.461619 2661 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.463473 systemd[1]: Created slice kubepods-burstable-pod9e3b09c38194b080617b0f6c0a6bc714.slice - libcontainer container kubepods-burstable-pod9e3b09c38194b080617b0f6c0a6bc714.slice. May 27 03:22:34.464739 kubelet[2661]: E0527 03:22:34.464722 2661 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.474359 kubelet[2661]: E0527 03:22:34.474342 2661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-98ca04e8ee?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="400ms" May 27 03:22:34.571375 kubelet[2661]: I0527 03:22:34.571301 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e3b09c38194b080617b0f6c0a6bc714-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-98ca04e8ee\" (UID: \"9e3b09c38194b080617b0f6c0a6bc714\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.571375 kubelet[2661]: I0527 03:22:34.571328 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13e19515f9fe0fce5b75343ac3937a76-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-98ca04e8ee\" (UID: \"13e19515f9fe0fce5b75343ac3937a76\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.571375 kubelet[2661]: I0527 03:22:34.571346 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13e19515f9fe0fce5b75343ac3937a76-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-98ca04e8ee\" (UID: \"13e19515f9fe0fce5b75343ac3937a76\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.571375 kubelet[2661]: I0527 03:22:34.571363 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.571624 kubelet[2661]: I0527 03:22:34.571381 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.571624 kubelet[2661]: I0527 03:22:34.571399 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.571624 kubelet[2661]: I0527 03:22:34.571419 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13e19515f9fe0fce5b75343ac3937a76-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-98ca04e8ee\" (UID: \"13e19515f9fe0fce5b75343ac3937a76\") " 
pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.571624 kubelet[2661]: I0527 03:22:34.571432 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.571624 kubelet[2661]: I0527 03:22:34.571447 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.648917 kubelet[2661]: I0527 03:22:34.648867 2661 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.649259 kubelet[2661]: E0527 03:22:34.649234 2661 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:34.752868 containerd[1720]: time="2025-05-27T03:22:34.752826684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-98ca04e8ee,Uid:13e19515f9fe0fce5b75343ac3937a76,Namespace:kube-system,Attempt:0,}" May 27 03:22:34.762582 containerd[1720]: time="2025-05-27T03:22:34.762554067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-98ca04e8ee,Uid:860ba2702ccdc5931761863d47e78798,Namespace:kube-system,Attempt:0,}" May 27 03:22:34.767585 containerd[1720]: time="2025-05-27T03:22:34.767559418Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-98ca04e8ee,Uid:9e3b09c38194b080617b0f6c0a6bc714,Namespace:kube-system,Attempt:0,}" May 27 03:22:34.821848 containerd[1720]: time="2025-05-27T03:22:34.821171131Z" level=info msg="connecting to shim b4613c438fca4a0be21dfd10b7236faea2d924d983597eeab0f8175f4cb8aa08" address="unix:///run/containerd/s/6375674456b62e73d3d1ca7923c58909ce4ff882d488278ed8f9bb67eaaf28dc" namespace=k8s.io protocol=ttrpc version=3 May 27 03:22:34.850032 containerd[1720]: time="2025-05-27T03:22:34.849994106Z" level=info msg="connecting to shim fd0d75f1130dc4fa4c25a48ca0bf9bff0b4242048d9ed9df78c5eb22e5ea1aa0" address="unix:///run/containerd/s/70ae0be10233b7276d3d164f4ceae532b560b88e0b9d54860d2e40dfedfd0c0b" namespace=k8s.io protocol=ttrpc version=3 May 27 03:22:34.852105 systemd[1]: Started cri-containerd-b4613c438fca4a0be21dfd10b7236faea2d924d983597eeab0f8175f4cb8aa08.scope - libcontainer container b4613c438fca4a0be21dfd10b7236faea2d924d983597eeab0f8175f4cb8aa08. May 27 03:22:34.857352 containerd[1720]: time="2025-05-27T03:22:34.857323197Z" level=info msg="connecting to shim 7423d91b246b9a5920d3b5cfec541ee99df59a66c1140ac389b9e414afc635fe" address="unix:///run/containerd/s/48fe8857f99b2f74f2a38882b2e62d5994dfc1539732fca62bb7ecad47245041" namespace=k8s.io protocol=ttrpc version=3 May 27 03:22:34.875352 kubelet[2661]: E0527 03:22:34.874892 2661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-98ca04e8ee?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="800ms" May 27 03:22:34.890071 systemd[1]: Started cri-containerd-fd0d75f1130dc4fa4c25a48ca0bf9bff0b4242048d9ed9df78c5eb22e5ea1aa0.scope - libcontainer container fd0d75f1130dc4fa4c25a48ca0bf9bff0b4242048d9ed9df78c5eb22e5ea1aa0. 
May 27 03:22:34.893890 systemd[1]: Started cri-containerd-7423d91b246b9a5920d3b5cfec541ee99df59a66c1140ac389b9e414afc635fe.scope - libcontainer container 7423d91b246b9a5920d3b5cfec541ee99df59a66c1140ac389b9e414afc635fe. May 27 03:22:34.920837 containerd[1720]: time="2025-05-27T03:22:34.920810574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-98ca04e8ee,Uid:13e19515f9fe0fce5b75343ac3937a76,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4613c438fca4a0be21dfd10b7236faea2d924d983597eeab0f8175f4cb8aa08\"" May 27 03:22:34.928891 containerd[1720]: time="2025-05-27T03:22:34.928855563Z" level=info msg="CreateContainer within sandbox \"b4613c438fca4a0be21dfd10b7236faea2d924d983597eeab0f8175f4cb8aa08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:22:34.945007 containerd[1720]: time="2025-05-27T03:22:34.944868262Z" level=info msg="Container a8ff7bec2eeac621d3c2fed6ce9a66515f4b2353cdac8a69ff6d964c5215ecd0: CDI devices from CRI Config.CDIDevices: []" May 27 03:22:34.958768 containerd[1720]: time="2025-05-27T03:22:34.958735711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-98ca04e8ee,Uid:860ba2702ccdc5931761863d47e78798,Namespace:kube-system,Attempt:0,} returns sandbox id \"7423d91b246b9a5920d3b5cfec541ee99df59a66c1140ac389b9e414afc635fe\"" May 27 03:22:34.959474 containerd[1720]: time="2025-05-27T03:22:34.959446738Z" level=info msg="CreateContainer within sandbox \"b4613c438fca4a0be21dfd10b7236faea2d924d983597eeab0f8175f4cb8aa08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a8ff7bec2eeac621d3c2fed6ce9a66515f4b2353cdac8a69ff6d964c5215ecd0\"" May 27 03:22:34.960399 containerd[1720]: time="2025-05-27T03:22:34.959919084Z" level=info msg="StartContainer for \"a8ff7bec2eeac621d3c2fed6ce9a66515f4b2353cdac8a69ff6d964c5215ecd0\"" May 27 03:22:34.960873 containerd[1720]: time="2025-05-27T03:22:34.960846042Z" level=info 
msg="connecting to shim a8ff7bec2eeac621d3c2fed6ce9a66515f4b2353cdac8a69ff6d964c5215ecd0" address="unix:///run/containerd/s/6375674456b62e73d3d1ca7923c58909ce4ff882d488278ed8f9bb67eaaf28dc" protocol=ttrpc version=3 May 27 03:22:34.961937 containerd[1720]: time="2025-05-27T03:22:34.961915208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-98ca04e8ee,Uid:9e3b09c38194b080617b0f6c0a6bc714,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd0d75f1130dc4fa4c25a48ca0bf9bff0b4242048d9ed9df78c5eb22e5ea1aa0\"" May 27 03:22:34.965530 containerd[1720]: time="2025-05-27T03:22:34.965507165Z" level=info msg="CreateContainer within sandbox \"7423d91b246b9a5920d3b5cfec541ee99df59a66c1140ac389b9e414afc635fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:22:34.969309 containerd[1720]: time="2025-05-27T03:22:34.969284449Z" level=info msg="CreateContainer within sandbox \"fd0d75f1130dc4fa4c25a48ca0bf9bff0b4242048d9ed9df78c5eb22e5ea1aa0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:22:34.977999 systemd[1]: Started cri-containerd-a8ff7bec2eeac621d3c2fed6ce9a66515f4b2353cdac8a69ff6d964c5215ecd0.scope - libcontainer container a8ff7bec2eeac621d3c2fed6ce9a66515f4b2353cdac8a69ff6d964c5215ecd0. 
May 27 03:22:34.981339 containerd[1720]: time="2025-05-27T03:22:34.981314123Z" level=info msg="Container 849f17f231bf0ab3aebeeef3c6ca6ea9efd9ecb4b1772785372da9c0bf009403: CDI devices from CRI Config.CDIDevices: []" May 27 03:22:34.993094 containerd[1720]: time="2025-05-27T03:22:34.993069373Z" level=info msg="CreateContainer within sandbox \"7423d91b246b9a5920d3b5cfec541ee99df59a66c1140ac389b9e414afc635fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"849f17f231bf0ab3aebeeef3c6ca6ea9efd9ecb4b1772785372da9c0bf009403\"" May 27 03:22:34.993414 containerd[1720]: time="2025-05-27T03:22:34.993397447Z" level=info msg="StartContainer for \"849f17f231bf0ab3aebeeef3c6ca6ea9efd9ecb4b1772785372da9c0bf009403\"" May 27 03:22:34.994267 containerd[1720]: time="2025-05-27T03:22:34.994241162Z" level=info msg="connecting to shim 849f17f231bf0ab3aebeeef3c6ca6ea9efd9ecb4b1772785372da9c0bf009403" address="unix:///run/containerd/s/48fe8857f99b2f74f2a38882b2e62d5994dfc1539732fca62bb7ecad47245041" protocol=ttrpc version=3 May 27 03:22:34.996900 containerd[1720]: time="2025-05-27T03:22:34.996127772Z" level=info msg="Container 12fcac1b2c32e237c92335c686f9180df7cc045b0b5173b675e92bcfb61cb6c8: CDI devices from CRI Config.CDIDevices: []" May 27 03:22:35.011965 systemd[1]: Started cri-containerd-849f17f231bf0ab3aebeeef3c6ca6ea9efd9ecb4b1772785372da9c0bf009403.scope - libcontainer container 849f17f231bf0ab3aebeeef3c6ca6ea9efd9ecb4b1772785372da9c0bf009403. 
May 27 03:22:35.012829 containerd[1720]: time="2025-05-27T03:22:35.012811358Z" level=info msg="CreateContainer within sandbox \"fd0d75f1130dc4fa4c25a48ca0bf9bff0b4242048d9ed9df78c5eb22e5ea1aa0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"12fcac1b2c32e237c92335c686f9180df7cc045b0b5173b675e92bcfb61cb6c8\"" May 27 03:22:35.013560 containerd[1720]: time="2025-05-27T03:22:35.013299225Z" level=info msg="StartContainer for \"12fcac1b2c32e237c92335c686f9180df7cc045b0b5173b675e92bcfb61cb6c8\"" May 27 03:22:35.015977 containerd[1720]: time="2025-05-27T03:22:35.015942453Z" level=info msg="connecting to shim 12fcac1b2c32e237c92335c686f9180df7cc045b0b5173b675e92bcfb61cb6c8" address="unix:///run/containerd/s/70ae0be10233b7276d3d164f4ceae532b560b88e0b9d54860d2e40dfedfd0c0b" protocol=ttrpc version=3 May 27 03:22:35.030890 containerd[1720]: time="2025-05-27T03:22:35.030755820Z" level=info msg="StartContainer for \"a8ff7bec2eeac621d3c2fed6ce9a66515f4b2353cdac8a69ff6d964c5215ecd0\" returns successfully" May 27 03:22:35.038004 systemd[1]: Started cri-containerd-12fcac1b2c32e237c92335c686f9180df7cc045b0b5173b675e92bcfb61cb6c8.scope - libcontainer container 12fcac1b2c32e237c92335c686f9180df7cc045b0b5173b675e92bcfb61cb6c8. 
May 27 03:22:35.051426 kubelet[2661]: I0527 03:22:35.051410 2661 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:35.051986 kubelet[2661]: E0527 03:22:35.051970 2661 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:35.080121 containerd[1720]: time="2025-05-27T03:22:35.080057259Z" level=info msg="StartContainer for \"849f17f231bf0ab3aebeeef3c6ca6ea9efd9ecb4b1772785372da9c0bf009403\" returns successfully" May 27 03:22:35.144626 containerd[1720]: time="2025-05-27T03:22:35.144604131Z" level=info msg="StartContainer for \"12fcac1b2c32e237c92335c686f9180df7cc045b0b5173b675e92bcfb61cb6c8\" returns successfully" May 27 03:22:35.355574 kubelet[2661]: E0527 03:22:35.355550 2661 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:35.360498 kubelet[2661]: E0527 03:22:35.360480 2661 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:35.367284 kubelet[2661]: E0527 03:22:35.367265 2661 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:35.834027 update_engine[1705]: I20250527 03:22:35.833921 1705 update_attempter.cc:509] Updating boot flags... 
May 27 03:22:35.857653 kubelet[2661]: I0527 03:22:35.857552 2661 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:36.368895 kubelet[2661]: E0527 03:22:36.368851 2661 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:36.369785 kubelet[2661]: E0527 03:22:36.369527 2661 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:37.047144 kubelet[2661]: E0527 03:22:37.047091 2661 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.0.0-a-98ca04e8ee\" not found" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:37.120081 kubelet[2661]: I0527 03:22:37.120027 2661 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-98ca04e8ee" May 27 03:22:37.174107 kubelet[2661]: I0527 03:22:37.174072 2661 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:37.231306 kubelet[2661]: E0527 03:22:37.231265 2661 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-98ca04e8ee\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:37.232068 kubelet[2661]: I0527 03:22:37.231397 2661 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:37.235137 kubelet[2661]: E0527 03:22:37.235106 2661 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:37.235270 kubelet[2661]: I0527 03:22:37.235261 2661 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:37.236954 kubelet[2661]: E0527 03:22:37.236917 2661 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-98ca04e8ee\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-a-98ca04e8ee" May 27 03:22:37.257457 kubelet[2661]: I0527 03:22:37.257283 2661 apiserver.go:52] "Watching apiserver" May 27 03:22:37.270448 kubelet[2661]: I0527 03:22:37.270425 2661 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:22:38.131615 kernel: hv_balloon: Max. dynamic memory size: 8192 MB May 27 03:22:38.958678 systemd[1]: Reload requested from client PID 2960 ('systemctl') (unit session-9.scope)... May 27 03:22:38.958694 systemd[1]: Reloading... May 27 03:22:39.041914 zram_generator::config[3008]: No configuration found. May 27 03:22:39.109633 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:22:39.201060 systemd[1]: Reloading finished in 242 ms. May 27 03:22:39.229790 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:22:39.246692 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:22:39.246935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:22:39.246981 systemd[1]: kubelet.service: Consumed 538ms CPU time, 129.5M memory peak. May 27 03:22:39.248352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:22:39.620725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:22:39.627181 (kubelet)[3072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:22:39.664199 kubelet[3072]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:22:39.664199 kubelet[3072]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 03:22:39.664199 kubelet[3072]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:22:39.664459 kubelet[3072]: I0527 03:22:39.664264 3072 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:22:39.672659 kubelet[3072]: I0527 03:22:39.672605 3072 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 03:22:39.672659 kubelet[3072]: I0527 03:22:39.672625 3072 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:22:39.672885 kubelet[3072]: I0527 03:22:39.672823 3072 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 03:22:39.673976 kubelet[3072]: I0527 03:22:39.673959 3072 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
May 27 03:22:39.676365 kubelet[3072]: I0527 03:22:39.675804 3072 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:22:39.679780 kubelet[3072]: I0527 03:22:39.679766 3072 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:22:39.682516 kubelet[3072]: I0527 03:22:39.682495 3072 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:22:39.682746 kubelet[3072]: I0527 03:22:39.682723 3072 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:22:39.683952 kubelet[3072]: I0527 03:22:39.682749 3072 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-98ca04e8ee","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:22:39.684279 kubelet[3072]: I0527 03:22:39.684126 3072 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:22:39.684279 kubelet[3072]: I0527 03:22:39.684141 3072 container_manager_linux.go:303] "Creating device plugin manager"
May 27 03:22:39.684279 kubelet[3072]: I0527 03:22:39.684203 3072 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:22:39.684469 kubelet[3072]: I0527 03:22:39.684463 3072 kubelet.go:480] "Attempting to sync node with API server"
May 27 03:22:39.684530 kubelet[3072]: I0527 03:22:39.684524 3072 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:22:39.684591 kubelet[3072]: I0527 03:22:39.684588 3072 kubelet.go:386] "Adding apiserver pod source"
May 27 03:22:39.684633 kubelet[3072]: I0527 03:22:39.684629 3072 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:22:39.689817 kubelet[3072]: I0527 03:22:39.688024 3072 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:22:39.689817 kubelet[3072]: I0527 03:22:39.688438 3072 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 03:22:39.690689 kubelet[3072]: I0527 03:22:39.690368 3072 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 03:22:39.690689 kubelet[3072]: I0527 03:22:39.690408 3072 server.go:1289] "Started kubelet"
May 27 03:22:39.693344 kubelet[3072]: I0527 03:22:39.691957 3072 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:22:39.696979 kubelet[3072]: I0527 03:22:39.696952 3072 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 03:22:39.699801 kubelet[3072]: I0527 03:22:39.699531 3072 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:22:39.700289 kubelet[3072]: I0527 03:22:39.700277 3072 server.go:317] "Adding debug handlers to kubelet server"
May 27 03:22:39.703021 kubelet[3072]: I0527 03:22:39.702971 3072 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:22:39.703336 kubelet[3072]: I0527 03:22:39.703151 3072 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:22:39.703336 kubelet[3072]: I0527 03:22:39.703328 3072 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:22:39.705514 kubelet[3072]: I0527 03:22:39.704960 3072 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 03:22:39.705514 kubelet[3072]: E0527 03:22:39.705114 3072 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-98ca04e8ee\" not found"
May 27 03:22:39.709031 kubelet[3072]: I0527 03:22:39.709016 3072 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 03:22:39.709176 kubelet[3072]: I0527 03:22:39.709105 3072 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:22:39.710800 kubelet[3072]: I0527 03:22:39.710696 3072 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 03:22:39.710800 kubelet[3072]: I0527 03:22:39.710712 3072 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 03:22:39.710800 kubelet[3072]: I0527 03:22:39.710727 3072 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 03:22:39.710800 kubelet[3072]: I0527 03:22:39.710734 3072 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 03:22:39.710800 kubelet[3072]: E0527 03:22:39.710767 3072 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:22:39.716590 kubelet[3072]: I0527 03:22:39.716161 3072 factory.go:223] Registration of the containerd container factory successfully
May 27 03:22:39.716590 kubelet[3072]: I0527 03:22:39.716178 3072 factory.go:223] Registration of the systemd container factory successfully
May 27 03:22:39.716590 kubelet[3072]: I0527 03:22:39.716241 3072 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:22:39.761085 kubelet[3072]: I0527 03:22:39.761076 3072 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 03:22:39.761198 kubelet[3072]: I0527 03:22:39.761175 3072 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 03:22:39.761230 kubelet[3072]: I0527 03:22:39.761227 3072 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:22:39.761337 kubelet[3072]: I0527 03:22:39.761332 3072 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 27 03:22:39.761384 kubelet[3072]: I0527 03:22:39.761364 3072 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 27 03:22:39.761408 kubelet[3072]: I0527 03:22:39.761385 3072 policy_none.go:49] "None policy: Start"
May 27 03:22:39.761408 kubelet[3072]: I0527 03:22:39.761396 3072 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 03:22:39.761408 kubelet[3072]: I0527 03:22:39.761407 3072 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:22:39.761510 kubelet[3072]: I0527 03:22:39.761501 3072 state_mem.go:75] "Updated machine memory state"
May 27 03:22:39.764283 kubelet[3072]: E0527 03:22:39.764268 3072 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 03:22:39.764822 kubelet[3072]: I0527 03:22:39.764388 3072 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:22:39.764822 kubelet[3072]: I0527 03:22:39.764399 3072 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:22:39.764822 kubelet[3072]: I0527 03:22:39.764534 3072 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:22:39.765749 kubelet[3072]: E0527 03:22:39.765734 3072 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 03:22:39.812013 kubelet[3072]: I0527 03:22:39.811990 3072 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.812117 kubelet[3072]: I0527 03:22:39.812106 3072 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.812195 kubelet[3072]: I0527 03:22:39.812010 3072 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.818116 kubelet[3072]: I0527 03:22:39.818103 3072 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
May 27 03:22:39.818323 kubelet[3072]: I0527 03:22:39.818249 3072 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
May 27 03:22:39.820638 kubelet[3072]: I0527 03:22:39.820612 3072 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
May 27 03:22:39.866303 kubelet[3072]: I0527 03:22:39.866274 3072 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.874019 kubelet[3072]: I0527 03:22:39.873952 3072 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.874019 kubelet[3072]: I0527 03:22:39.873997 3072 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.910033 kubelet[3072]: I0527 03:22:39.910012 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13e19515f9fe0fce5b75343ac3937a76-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-98ca04e8ee\" (UID: \"13e19515f9fe0fce5b75343ac3937a76\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.910166 kubelet[3072]: I0527 03:22:39.910039 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13e19515f9fe0fce5b75343ac3937a76-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-98ca04e8ee\" (UID: \"13e19515f9fe0fce5b75343ac3937a76\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.910166 kubelet[3072]: I0527 03:22:39.910156 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13e19515f9fe0fce5b75343ac3937a76-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-98ca04e8ee\" (UID: \"13e19515f9fe0fce5b75343ac3937a76\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.910221 kubelet[3072]: I0527 03:22:39.910176 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:39.910221 kubelet[3072]: I0527 03:22:39.910191 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:40.010998 kubelet[3072]: I0527 03:22:40.010949 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:40.010998 kubelet[3072]: I0527 03:22:40.010978 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:40.011182 kubelet[3072]: I0527 03:22:40.011151 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/860ba2702ccdc5931761863d47e78798-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-98ca04e8ee\" (UID: \"860ba2702ccdc5931761863d47e78798\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:40.011295 kubelet[3072]: I0527 03:22:40.011223 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e3b09c38194b080617b0f6c0a6bc714-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-98ca04e8ee\" (UID: \"9e3b09c38194b080617b0f6c0a6bc714\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:40.687066 kubelet[3072]: I0527 03:22:40.687039 3072 apiserver.go:52] "Watching apiserver"
May 27 03:22:40.709404 kubelet[3072]: I0527 03:22:40.709381 3072 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 03:22:40.749244 kubelet[3072]: I0527 03:22:40.748805 3072 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:40.749244 kubelet[3072]: I0527 03:22:40.749170 3072 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:40.759397 kubelet[3072]: I0527 03:22:40.759343 3072 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
May 27 03:22:40.759397 kubelet[3072]: E0527 03:22:40.759397 3072 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-98ca04e8ee\" already exists" pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:40.761890 kubelet[3072]: I0527 03:22:40.761441 3072 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
May 27 03:22:40.761890 kubelet[3072]: E0527 03:22:40.761482 3072 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-98ca04e8ee\" already exists" pod="kube-system/kube-scheduler-ci-4344.0.0-a-98ca04e8ee"
May 27 03:22:40.811448 kubelet[3072]: I0527 03:22:40.811398 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.0.0-a-98ca04e8ee" podStartSLOduration=1.811383475 podStartE2EDuration="1.811383475s" podCreationTimestamp="2025-05-27 03:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:22:40.797163605 +0000 UTC m=+1.166075369" watchObservedRunningTime="2025-05-27 03:22:40.811383475 +0000 UTC m=+1.180295223"
May 27 03:22:40.821458 kubelet[3072]: I0527 03:22:40.821423 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-98ca04e8ee" podStartSLOduration=1.821412199 podStartE2EDuration="1.821412199s" podCreationTimestamp="2025-05-27 03:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:22:40.81190483 +0000 UTC m=+1.180816586" watchObservedRunningTime="2025-05-27 03:22:40.821412199 +0000 UTC m=+1.190324124"
May 27 03:22:40.832257 kubelet[3072]: I0527 03:22:40.831968 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.0.0-a-98ca04e8ee" podStartSLOduration=1.831958018 podStartE2EDuration="1.831958018s" podCreationTimestamp="2025-05-27 03:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:22:40.821838331 +0000 UTC m=+1.190750100" watchObservedRunningTime="2025-05-27 03:22:40.831958018 +0000 UTC m=+1.200869770"
May 27 03:22:45.386967 kubelet[3072]: I0527 03:22:45.386926 3072 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 27 03:22:45.387456 containerd[1720]: time="2025-05-27T03:22:45.387322589Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 27 03:22:45.387650 kubelet[3072]: I0527 03:22:45.387566 3072 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 27 03:22:46.389659 systemd[1]: Created slice kubepods-besteffort-pod04b22d12_4862_4ca1_8e81_e40e9dde506d.slice - libcontainer container kubepods-besteffort-pod04b22d12_4862_4ca1_8e81_e40e9dde506d.slice.
May 27 03:22:46.454568 kubelet[3072]: I0527 03:22:46.454517 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04b22d12-4862-4ca1-8e81-e40e9dde506d-lib-modules\") pod \"kube-proxy-6xcsr\" (UID: \"04b22d12-4862-4ca1-8e81-e40e9dde506d\") " pod="kube-system/kube-proxy-6xcsr"
May 27 03:22:46.454568 kubelet[3072]: I0527 03:22:46.454549 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hk2w\" (UniqueName: \"kubernetes.io/projected/04b22d12-4862-4ca1-8e81-e40e9dde506d-kube-api-access-7hk2w\") pod \"kube-proxy-6xcsr\" (UID: \"04b22d12-4862-4ca1-8e81-e40e9dde506d\") " pod="kube-system/kube-proxy-6xcsr"
May 27 03:22:46.454568 kubelet[3072]: I0527 03:22:46.454574 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/04b22d12-4862-4ca1-8e81-e40e9dde506d-kube-proxy\") pod \"kube-proxy-6xcsr\" (UID: \"04b22d12-4862-4ca1-8e81-e40e9dde506d\") " pod="kube-system/kube-proxy-6xcsr"
May 27 03:22:46.454943 kubelet[3072]: I0527 03:22:46.454592 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04b22d12-4862-4ca1-8e81-e40e9dde506d-xtables-lock\") pod \"kube-proxy-6xcsr\" (UID: \"04b22d12-4862-4ca1-8e81-e40e9dde506d\") " pod="kube-system/kube-proxy-6xcsr"
May 27 03:22:46.606179 systemd[1]: Created slice kubepods-besteffort-pod1fd793c7_dd00_4e25_980b_ed425e38088e.slice - libcontainer container kubepods-besteffort-pod1fd793c7_dd00_4e25_980b_ed425e38088e.slice.
May 27 03:22:46.656117 kubelet[3072]: I0527 03:22:46.656019 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1fd793c7-dd00-4e25-980b-ed425e38088e-var-lib-calico\") pod \"tigera-operator-844669ff44-vpwwd\" (UID: \"1fd793c7-dd00-4e25-980b-ed425e38088e\") " pod="tigera-operator/tigera-operator-844669ff44-vpwwd"
May 27 03:22:46.656117 kubelet[3072]: I0527 03:22:46.656072 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx45k\" (UniqueName: \"kubernetes.io/projected/1fd793c7-dd00-4e25-980b-ed425e38088e-kube-api-access-sx45k\") pod \"tigera-operator-844669ff44-vpwwd\" (UID: \"1fd793c7-dd00-4e25-980b-ed425e38088e\") " pod="tigera-operator/tigera-operator-844669ff44-vpwwd"
May 27 03:22:46.698100 containerd[1720]: time="2025-05-27T03:22:46.698066397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xcsr,Uid:04b22d12-4862-4ca1-8e81-e40e9dde506d,Namespace:kube-system,Attempt:0,}"
May 27 03:22:46.730514 containerd[1720]: time="2025-05-27T03:22:46.730409050Z" level=info msg="connecting to shim 53ce1f10c0483e4a6e90dd88009745782dd4f7cf01ec1ee9cccb4a0f859a5a56" address="unix:///run/containerd/s/052be2e16a74aec319814b1f49d1c60a400d336eb59e54085435b88ef6ed586d" namespace=k8s.io protocol=ttrpc version=3
May 27 03:22:46.756115 systemd[1]: Started cri-containerd-53ce1f10c0483e4a6e90dd88009745782dd4f7cf01ec1ee9cccb4a0f859a5a56.scope - libcontainer container 53ce1f10c0483e4a6e90dd88009745782dd4f7cf01ec1ee9cccb4a0f859a5a56.
May 27 03:22:46.781093 containerd[1720]: time="2025-05-27T03:22:46.781048642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xcsr,Uid:04b22d12-4862-4ca1-8e81-e40e9dde506d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53ce1f10c0483e4a6e90dd88009745782dd4f7cf01ec1ee9cccb4a0f859a5a56\""
May 27 03:22:46.787659 containerd[1720]: time="2025-05-27T03:22:46.787636883Z" level=info msg="CreateContainer within sandbox \"53ce1f10c0483e4a6e90dd88009745782dd4f7cf01ec1ee9cccb4a0f859a5a56\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 27 03:22:46.805740 containerd[1720]: time="2025-05-27T03:22:46.804769042Z" level=info msg="Container 2683068daf116f1b8fe4722c0c1b7e897109869f26b20183cc46f4ceb0f95576: CDI devices from CRI Config.CDIDevices: []"
May 27 03:22:46.820835 containerd[1720]: time="2025-05-27T03:22:46.820806298Z" level=info msg="CreateContainer within sandbox \"53ce1f10c0483e4a6e90dd88009745782dd4f7cf01ec1ee9cccb4a0f859a5a56\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2683068daf116f1b8fe4722c0c1b7e897109869f26b20183cc46f4ceb0f95576\""
May 27 03:22:46.821208 containerd[1720]: time="2025-05-27T03:22:46.821182142Z" level=info msg="StartContainer for \"2683068daf116f1b8fe4722c0c1b7e897109869f26b20183cc46f4ceb0f95576\""
May 27 03:22:46.822708 containerd[1720]: time="2025-05-27T03:22:46.822658895Z" level=info msg="connecting to shim 2683068daf116f1b8fe4722c0c1b7e897109869f26b20183cc46f4ceb0f95576" address="unix:///run/containerd/s/052be2e16a74aec319814b1f49d1c60a400d336eb59e54085435b88ef6ed586d" protocol=ttrpc version=3
May 27 03:22:46.839126 systemd[1]: Started cri-containerd-2683068daf116f1b8fe4722c0c1b7e897109869f26b20183cc46f4ceb0f95576.scope - libcontainer container 2683068daf116f1b8fe4722c0c1b7e897109869f26b20183cc46f4ceb0f95576.
May 27 03:22:46.865662 containerd[1720]: time="2025-05-27T03:22:46.865594937Z" level=info msg="StartContainer for \"2683068daf116f1b8fe4722c0c1b7e897109869f26b20183cc46f4ceb0f95576\" returns successfully"
May 27 03:22:46.910107 containerd[1720]: time="2025-05-27T03:22:46.910027734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-vpwwd,Uid:1fd793c7-dd00-4e25-980b-ed425e38088e,Namespace:tigera-operator,Attempt:0,}"
May 27 03:22:46.944269 containerd[1720]: time="2025-05-27T03:22:46.943982544Z" level=info msg="connecting to shim 31c39c3dca3ef6a4b8623436647fcd5e9795bf1dbbd856a2c4623ecf540424a6" address="unix:///run/containerd/s/f2b0737177f89e381ea3b1cfcfc20a3f0faa627b4181f8337b57014c69139188" namespace=k8s.io protocol=ttrpc version=3
May 27 03:22:46.966030 systemd[1]: Started cri-containerd-31c39c3dca3ef6a4b8623436647fcd5e9795bf1dbbd856a2c4623ecf540424a6.scope - libcontainer container 31c39c3dca3ef6a4b8623436647fcd5e9795bf1dbbd856a2c4623ecf540424a6.
May 27 03:22:47.004369 containerd[1720]: time="2025-05-27T03:22:47.004338669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-vpwwd,Uid:1fd793c7-dd00-4e25-980b-ed425e38088e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"31c39c3dca3ef6a4b8623436647fcd5e9795bf1dbbd856a2c4623ecf540424a6\""
May 27 03:22:47.005490 containerd[1720]: time="2025-05-27T03:22:47.005473141Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 27 03:22:47.770873 kubelet[3072]: I0527 03:22:47.770776 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6xcsr" podStartSLOduration=1.7707539319999999 podStartE2EDuration="1.770753932s" podCreationTimestamp="2025-05-27 03:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:22:47.77005132 +0000 UTC m=+8.138963074" watchObservedRunningTime="2025-05-27 03:22:47.770753932 +0000 UTC m=+8.139665689"
May 27 03:22:48.289803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521274181.mount: Deactivated successfully.
May 27 03:22:48.653848 containerd[1720]: time="2025-05-27T03:22:48.653806660Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:22:48.655784 containerd[1720]: time="2025-05-27T03:22:48.655750968Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 27 03:22:48.658061 containerd[1720]: time="2025-05-27T03:22:48.658020784Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:22:48.661046 containerd[1720]: time="2025-05-27T03:22:48.660995838Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:22:48.661713 containerd[1720]: time="2025-05-27T03:22:48.661414261Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.655839533s"
May 27 03:22:48.661713 containerd[1720]: time="2025-05-27T03:22:48.661442306Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 27 03:22:48.666365 containerd[1720]: time="2025-05-27T03:22:48.666341045Z" level=info msg="CreateContainer within sandbox \"31c39c3dca3ef6a4b8623436647fcd5e9795bf1dbbd856a2c4623ecf540424a6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 27 03:22:48.680892 containerd[1720]: time="2025-05-27T03:22:48.680774239Z" level=info msg="Container 21febc7fe13a94696a95c39965e251ad7843ab959cbb369bee69936fad5de8b8: CDI devices from CRI Config.CDIDevices: []"
May 27 03:22:48.691635 containerd[1720]: time="2025-05-27T03:22:48.691610887Z" level=info msg="CreateContainer within sandbox \"31c39c3dca3ef6a4b8623436647fcd5e9795bf1dbbd856a2c4623ecf540424a6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"21febc7fe13a94696a95c39965e251ad7843ab959cbb369bee69936fad5de8b8\""
May 27 03:22:48.691966 containerd[1720]: time="2025-05-27T03:22:48.691949815Z" level=info msg="StartContainer for \"21febc7fe13a94696a95c39965e251ad7843ab959cbb369bee69936fad5de8b8\""
May 27 03:22:48.692883 containerd[1720]: time="2025-05-27T03:22:48.692821162Z" level=info msg="connecting to shim 21febc7fe13a94696a95c39965e251ad7843ab959cbb369bee69936fad5de8b8" address="unix:///run/containerd/s/f2b0737177f89e381ea3b1cfcfc20a3f0faa627b4181f8337b57014c69139188" protocol=ttrpc version=3
May 27 03:22:48.713030 systemd[1]: Started cri-containerd-21febc7fe13a94696a95c39965e251ad7843ab959cbb369bee69936fad5de8b8.scope - libcontainer container 21febc7fe13a94696a95c39965e251ad7843ab959cbb369bee69936fad5de8b8.
May 27 03:22:48.736281 containerd[1720]: time="2025-05-27T03:22:48.736255243Z" level=info msg="StartContainer for \"21febc7fe13a94696a95c39965e251ad7843ab959cbb369bee69936fad5de8b8\" returns successfully"
May 27 03:22:51.010699 kubelet[3072]: I0527 03:22:51.010367 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-vpwwd" podStartSLOduration=3.353524573 podStartE2EDuration="5.010347093s" podCreationTimestamp="2025-05-27 03:22:46 +0000 UTC" firstStartedPulling="2025-05-27 03:22:47.005173752 +0000 UTC m=+7.374085502" lastFinishedPulling="2025-05-27 03:22:48.661996267 +0000 UTC m=+9.030908022" observedRunningTime="2025-05-27 03:22:48.772663198 +0000 UTC m=+9.141574951" watchObservedRunningTime="2025-05-27 03:22:51.010347093 +0000 UTC m=+11.379258848"
May 27 03:22:54.158261 sudo[2152]: pam_unix(sudo:session): session closed for user root
May 27 03:22:54.260918 sshd[2151]: Connection closed by 10.200.16.10 port 39584
May 27 03:22:54.264051 sshd-session[2149]: pam_unix(sshd:session): session closed for user core
May 27 03:22:54.267778 systemd-logind[1703]: Session 9 logged out. Waiting for processes to exit.
May 27 03:22:54.269323 systemd[1]: sshd@6-10.200.8.16:22-10.200.16.10:39584.service: Deactivated successfully.
May 27 03:22:54.273735 systemd[1]: session-9.scope: Deactivated successfully.
May 27 03:22:54.275260 systemd[1]: session-9.scope: Consumed 3.295s CPU time, 229.5M memory peak.
May 27 03:22:54.280523 systemd-logind[1703]: Removed session 9.
May 27 03:22:57.014314 systemd[1]: Created slice kubepods-besteffort-podd8e91de8_8a59_43b3_b0b5_e061a8280221.slice - libcontainer container kubepods-besteffort-podd8e91de8_8a59_43b3_b0b5_e061a8280221.slice.
May 27 03:22:57.021570 kubelet[3072]: I0527 03:22:57.021343 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8e91de8-8a59-43b3-b0b5-e061a8280221-tigera-ca-bundle\") pod \"calico-typha-5b8db7795b-2gntd\" (UID: \"d8e91de8-8a59-43b3-b0b5-e061a8280221\") " pod="calico-system/calico-typha-5b8db7795b-2gntd"
May 27 03:22:57.021570 kubelet[3072]: I0527 03:22:57.021387 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d8e91de8-8a59-43b3-b0b5-e061a8280221-typha-certs\") pod \"calico-typha-5b8db7795b-2gntd\" (UID: \"d8e91de8-8a59-43b3-b0b5-e061a8280221\") " pod="calico-system/calico-typha-5b8db7795b-2gntd"
May 27 03:22:57.021570 kubelet[3072]: I0527 03:22:57.021408 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v7kr\" (UniqueName: \"kubernetes.io/projected/d8e91de8-8a59-43b3-b0b5-e061a8280221-kube-api-access-8v7kr\") pod \"calico-typha-5b8db7795b-2gntd\" (UID: \"d8e91de8-8a59-43b3-b0b5-e061a8280221\") " pod="calico-system/calico-typha-5b8db7795b-2gntd"
May 27 03:22:57.208919 systemd[1]: Created slice kubepods-besteffort-pod7d25a9e6_3da5_4784_b261_c5062612f774.slice - libcontainer container kubepods-besteffort-pod7d25a9e6_3da5_4784_b261_c5062612f774.slice.
May 27 03:22:57.223367 kubelet[3072]: I0527 03:22:57.223336 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d25a9e6-3da5-4784-b261-c5062612f774-xtables-lock\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223467 kubelet[3072]: I0527 03:22:57.223371 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d25a9e6-3da5-4784-b261-c5062612f774-var-lib-calico\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223467 kubelet[3072]: I0527 03:22:57.223388 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7d25a9e6-3da5-4784-b261-c5062612f774-cni-log-dir\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223467 kubelet[3072]: I0527 03:22:57.223403 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7d25a9e6-3da5-4784-b261-c5062612f774-cni-net-dir\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223467 kubelet[3072]: I0527 03:22:57.223421 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7d25a9e6-3da5-4784-b261-c5062612f774-var-run-calico\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223467 kubelet[3072]: I0527 03:22:57.223441 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d25a9e6-3da5-4784-b261-c5062612f774-tigera-ca-bundle\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223569 kubelet[3072]: I0527 03:22:57.223459 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7d25a9e6-3da5-4784-b261-c5062612f774-cni-bin-dir\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223569 kubelet[3072]: I0527 03:22:57.223489 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7d25a9e6-3da5-4784-b261-c5062612f774-flexvol-driver-host\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223569 kubelet[3072]: I0527 03:22:57.223509 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d25a9e6-3da5-4784-b261-c5062612f774-lib-modules\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223569 kubelet[3072]: I0527 03:22:57.223532 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7d25a9e6-3da5-4784-b261-c5062612f774-node-certs\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223569 kubelet[3072]: I0527 03:22:57.223549 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7d25a9e6-3da5-4784-b261-c5062612f774-policysync\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.223669 kubelet[3072]: I0527 03:22:57.223570 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sxd5\" (UniqueName: \"kubernetes.io/projected/7d25a9e6-3da5-4784-b261-c5062612f774-kube-api-access-8sxd5\") pod \"calico-node-gx8q5\" (UID: \"7d25a9e6-3da5-4784-b261-c5062612f774\") " pod="calico-system/calico-node-gx8q5"
May 27 03:22:57.320737 containerd[1720]: time="2025-05-27T03:22:57.320612685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b8db7795b-2gntd,Uid:d8e91de8-8a59-43b3-b0b5-e061a8280221,Namespace:calico-system,Attempt:0,}"
May 27 03:22:57.325895 kubelet[3072]: E0527 03:22:57.325678 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:22:57.325895 kubelet[3072]: W0527 03:22:57.325715 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:22:57.325895 kubelet[3072]: E0527 03:22:57.325745 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 27 03:22:57.366731 containerd[1720]: time="2025-05-27T03:22:57.366699842Z" level=info msg="connecting to shim 1a7e5a4c1d879029a073f52dd0777d04cd226d3312d90bb692fafd0451420262" address="unix:///run/containerd/s/c1fe9a9175624896a58f34af818943767a4d472019d85621e801537987d0bf9e" namespace=k8s.io protocol=ttrpc version=3
May 27 03:22:57.385024 systemd[1]: Started cri-containerd-1a7e5a4c1d879029a073f52dd0777d04cd226d3312d90bb692fafd0451420262.scope - libcontainer container 1a7e5a4c1d879029a073f52dd0777d04cd226d3312d90bb692fafd0451420262.
May 27 03:22:57.425972 containerd[1720]: time="2025-05-27T03:22:57.425941775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b8db7795b-2gntd,Uid:d8e91de8-8a59-43b3-b0b5-e061a8280221,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a7e5a4c1d879029a073f52dd0777d04cd226d3312d90bb692fafd0451420262\""
May 27 03:22:57.428206 containerd[1720]: time="2025-05-27T03:22:57.428115739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\""
May 27 03:22:57.500289 kubelet[3072]: E0527 03:22:57.500242 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pft9h" podUID="aaa94a96-399c-4345-92d5-d811c3bc141a"
May 27 03:22:57.514574 containerd[1720]: time="2025-05-27T03:22:57.514550203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gx8q5,Uid:7d25a9e6-3da5-4784-b261-c5062612f774,Namespace:calico-system,Attempt:0,}"
May 27 03:22:57.516723 kubelet[3072]: E0527 03:22:57.516646 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:22:57.516723 kubelet[3072]: W0527 03:22:57.516668 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:22:57.516723 kubelet[3072]: E0527 03:22:57.516675 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.516940 kubelet[3072]: E0527 03:22:57.516787 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.516940 kubelet[3072]: W0527 03:22:57.516792 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.516940 kubelet[3072]: E0527 03:22:57.516798 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.516940 kubelet[3072]: E0527 03:22:57.516939 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.517058 kubelet[3072]: W0527 03:22:57.516944 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.517058 kubelet[3072]: E0527 03:22:57.516951 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.517058 kubelet[3072]: E0527 03:22:57.517045 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.517058 kubelet[3072]: W0527 03:22:57.517050 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.517188 kubelet[3072]: E0527 03:22:57.517065 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.517188 kubelet[3072]: E0527 03:22:57.517168 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.517188 kubelet[3072]: W0527 03:22:57.517175 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.517188 kubelet[3072]: E0527 03:22:57.517182 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.517313 kubelet[3072]: E0527 03:22:57.517289 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.517313 kubelet[3072]: W0527 03:22:57.517294 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.517313 kubelet[3072]: E0527 03:22:57.517301 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.525846 kubelet[3072]: E0527 03:22:57.525830 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.525846 kubelet[3072]: W0527 03:22:57.525845 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.525948 kubelet[3072]: E0527 03:22:57.525858 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.525948 kubelet[3072]: I0527 03:22:57.525911 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aaa94a96-399c-4345-92d5-d811c3bc141a-registration-dir\") pod \"csi-node-driver-pft9h\" (UID: \"aaa94a96-399c-4345-92d5-d811c3bc141a\") " pod="calico-system/csi-node-driver-pft9h" May 27 03:22:57.526309 kubelet[3072]: E0527 03:22:57.526056 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.526309 kubelet[3072]: W0527 03:22:57.526064 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.526309 kubelet[3072]: E0527 03:22:57.526072 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.526309 kubelet[3072]: I0527 03:22:57.526092 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aaa94a96-399c-4345-92d5-d811c3bc141a-kubelet-dir\") pod \"csi-node-driver-pft9h\" (UID: \"aaa94a96-399c-4345-92d5-d811c3bc141a\") " pod="calico-system/csi-node-driver-pft9h" May 27 03:22:57.526309 kubelet[3072]: E0527 03:22:57.526194 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.526309 kubelet[3072]: W0527 03:22:57.526211 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.526309 kubelet[3072]: E0527 03:22:57.526218 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.526309 kubelet[3072]: I0527 03:22:57.526238 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55hwd\" (UniqueName: \"kubernetes.io/projected/aaa94a96-399c-4345-92d5-d811c3bc141a-kube-api-access-55hwd\") pod \"csi-node-driver-pft9h\" (UID: \"aaa94a96-399c-4345-92d5-d811c3bc141a\") " pod="calico-system/csi-node-driver-pft9h" May 27 03:22:57.526502 kubelet[3072]: E0527 03:22:57.526337 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.526502 kubelet[3072]: W0527 03:22:57.526364 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.526502 kubelet[3072]: E0527 03:22:57.526372 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.526502 kubelet[3072]: I0527 03:22:57.526390 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aaa94a96-399c-4345-92d5-d811c3bc141a-socket-dir\") pod \"csi-node-driver-pft9h\" (UID: \"aaa94a96-399c-4345-92d5-d811c3bc141a\") " pod="calico-system/csi-node-driver-pft9h" May 27 03:22:57.526588 kubelet[3072]: E0527 03:22:57.526518 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.526588 kubelet[3072]: W0527 03:22:57.526523 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.526588 kubelet[3072]: E0527 03:22:57.526529 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.526654 kubelet[3072]: E0527 03:22:57.526644 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.526654 kubelet[3072]: W0527 03:22:57.526648 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.526697 kubelet[3072]: E0527 03:22:57.526655 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.526778 kubelet[3072]: E0527 03:22:57.526766 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.526778 kubelet[3072]: W0527 03:22:57.526776 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.526854 kubelet[3072]: E0527 03:22:57.526783 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.526902 kubelet[3072]: E0527 03:22:57.526896 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.526928 kubelet[3072]: W0527 03:22:57.526903 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.526928 kubelet[3072]: E0527 03:22:57.526910 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.527015 kubelet[3072]: E0527 03:22:57.527003 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.527015 kubelet[3072]: W0527 03:22:57.527011 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.527084 kubelet[3072]: E0527 03:22:57.527017 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.527120 kubelet[3072]: E0527 03:22:57.527112 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.527144 kubelet[3072]: W0527 03:22:57.527119 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.527144 kubelet[3072]: E0527 03:22:57.527126 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.527222 kubelet[3072]: E0527 03:22:57.527211 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.527246 kubelet[3072]: W0527 03:22:57.527219 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.527246 kubelet[3072]: E0527 03:22:57.527244 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.527285 kubelet[3072]: I0527 03:22:57.527263 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aaa94a96-399c-4345-92d5-d811c3bc141a-varrun\") pod \"csi-node-driver-pft9h\" (UID: \"aaa94a96-399c-4345-92d5-d811c3bc141a\") " pod="calico-system/csi-node-driver-pft9h" May 27 03:22:57.527432 kubelet[3072]: E0527 03:22:57.527395 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.527432 kubelet[3072]: W0527 03:22:57.527404 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.527432 kubelet[3072]: E0527 03:22:57.527410 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.527512 kubelet[3072]: E0527 03:22:57.527510 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.527533 kubelet[3072]: W0527 03:22:57.527514 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.527555 kubelet[3072]: E0527 03:22:57.527520 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.527636 kubelet[3072]: E0527 03:22:57.527627 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.527636 kubelet[3072]: W0527 03:22:57.527635 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.527689 kubelet[3072]: E0527 03:22:57.527642 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.527730 kubelet[3072]: E0527 03:22:57.527720 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.527730 kubelet[3072]: W0527 03:22:57.527727 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.527765 kubelet[3072]: E0527 03:22:57.527733 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.548342 containerd[1720]: time="2025-05-27T03:22:57.548302459Z" level=info msg="connecting to shim 23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7" address="unix:///run/containerd/s/a1622089cfb5113524fdb7a47f37c3395067d25acd3e4d886eed3ad74652145c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:22:57.568475 systemd[1]: Started cri-containerd-23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7.scope - libcontainer container 23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7. 
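The repeated kubelet entries above all record one underlying condition: the FlexVolume prober finds the plugin directory nodeagent~uds, but its driver binary cannot be executed, so every `init` call returns empty output and the JSON unmarshal fails ("unexpected end of JSON input" on ""). A minimal diagnostic sketch one might run on the node; the plugin path is taken from the log, while the check itself is an assumption, not part of the log:

```shell
# Hypothetical check for the FlexVolume errors logged above.
# PLUGIN_DIR comes from the kubelet log; everything else is illustrative.
PLUGIN_DIR=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds

if [ -x "$PLUGIN_DIR/uds" ]; then
  # kubelet would be able to invoke "$PLUGIN_DIR/uds init"
  echo "driver present"
else
  # corresponds to the logged: executable file not found in $PATH
  echo "driver missing or not executable"
fi
```

If the binary is absent, either installing the driver or removing the stale nodeagent~uds directory would stop the probe loop; the errors themselves are non-fatal, and pod startup continues below.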
May 27 03:22:57.593441 containerd[1720]: time="2025-05-27T03:22:57.593418354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gx8q5,Uid:7d25a9e6-3da5-4784-b261-c5062612f774,Namespace:calico-system,Attempt:0,} returns sandbox id \"23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7\"" 
May 27 03:22:57.632755 kubelet[3072]: E0527 03:22:57.632686 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.632755 kubelet[3072]: W0527 03:22:57.632692 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.632755 kubelet[3072]: E0527 03:22:57.632698 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:22:57.632820 kubelet[3072]: E0527 03:22:57.632807 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.632820 kubelet[3072]: W0527 03:22:57.632812 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.632820 kubelet[3072]: E0527 03:22:57.632818 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:57.644496 kubelet[3072]: E0527 03:22:57.644479 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:22:57.644496 kubelet[3072]: W0527 03:22:57.644495 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:22:57.644585 kubelet[3072]: E0527 03:22:57.644506 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:22:58.822533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276926159.mount: Deactivated successfully. 
May 27 03:22:59.696161 containerd[1720]: time="2025-05-27T03:22:59.696117085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:59.698278 containerd[1720]: time="2025-05-27T03:22:59.698215749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 03:22:59.700403 containerd[1720]: time="2025-05-27T03:22:59.700364820Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:59.703386 containerd[1720]: time="2025-05-27T03:22:59.703343065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:22:59.703734 containerd[1720]: time="2025-05-27T03:22:59.703631179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.27540179s" May 27 03:22:59.703734 containerd[1720]: time="2025-05-27T03:22:59.703659074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:22:59.705007 containerd[1720]: time="2025-05-27T03:22:59.704966767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:22:59.717235 kubelet[3072]: E0527 03:22:59.714475 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pft9h" podUID="aaa94a96-399c-4345-92d5-d811c3bc141a" May 27 03:22:59.725643 containerd[1720]: time="2025-05-27T03:22:59.725620660Z" level=info msg="CreateContainer within sandbox \"1a7e5a4c1d879029a073f52dd0777d04cd226d3312d90bb692fafd0451420262\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:22:59.740138 containerd[1720]: time="2025-05-27T03:22:59.740097594Z" level=info msg="Container c8e5066c23fb3cd78f248237fe2e523108f4ceb43f5ecdd093d54d570df64aa8: CDI devices from CRI Config.CDIDevices: []" May 27 03:22:59.743928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281901879.mount: Deactivated successfully. May 27 03:22:59.758511 containerd[1720]: time="2025-05-27T03:22:59.758488844Z" level=info msg="CreateContainer within sandbox \"1a7e5a4c1d879029a073f52dd0777d04cd226d3312d90bb692fafd0451420262\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c8e5066c23fb3cd78f248237fe2e523108f4ceb43f5ecdd093d54d570df64aa8\"" May 27 03:22:59.759755 containerd[1720]: time="2025-05-27T03:22:59.759730480Z" level=info msg="StartContainer for \"c8e5066c23fb3cd78f248237fe2e523108f4ceb43f5ecdd093d54d570df64aa8\"" May 27 03:22:59.760799 containerd[1720]: time="2025-05-27T03:22:59.760771501Z" level=info msg="connecting to shim c8e5066c23fb3cd78f248237fe2e523108f4ceb43f5ecdd093d54d570df64aa8" address="unix:///run/containerd/s/c1fe9a9175624896a58f34af818943767a4d472019d85621e801537987d0bf9e" protocol=ttrpc version=3 May 27 03:22:59.782113 systemd[1]: Started cri-containerd-c8e5066c23fb3cd78f248237fe2e523108f4ceb43f5ecdd093d54d570df64aa8.scope - libcontainer container c8e5066c23fb3cd78f248237fe2e523108f4ceb43f5ecdd093d54d570df64aa8. 
May 27 03:22:59.934087 containerd[1720]: time="2025-05-27T03:22:59.934064824Z" level=info msg="StartContainer for \"c8e5066c23fb3cd78f248237fe2e523108f4ceb43f5ecdd093d54d570df64aa8\" returns successfully" May 27 03:23:00.836817 kubelet[3072]: E0527 03:23:00.836710 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:00.836817 kubelet[3072]: W0527 03:23:00.836732 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:00.836817 kubelet[3072]: E0527 03:23:00.836752 3072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:00.848614 containerd[1720]: time="2025-05-27T03:23:00.848574286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:00.851025 containerd[1720]: time="2025-05-27T03:23:00.850966516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 03:23:00.853654 containerd[1720]: time="2025-05-27T03:23:00.853600476Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:00.856916 containerd[1720]: time="2025-05-27T03:23:00.856862393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:00.857500 containerd[1720]: time="2025-05-27T03:23:00.857211811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.152197948s" May 27 03:23:00.857500 containerd[1720]: time="2025-05-27T03:23:00.857239999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference 
\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:23:00.862923 containerd[1720]: time="2025-05-27T03:23:00.862897749Z" level=info msg="CreateContainer within sandbox \"23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:23:00.879888 containerd[1720]: time="2025-05-27T03:23:00.879038232Z" level=info msg="Container 5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:00.892376 containerd[1720]: time="2025-05-27T03:23:00.892352592Z" level=info msg="CreateContainer within sandbox \"23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98\"" May 27 03:23:00.892750 containerd[1720]: time="2025-05-27T03:23:00.892733068Z" level=info msg="StartContainer for \"5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98\"" May 27 03:23:00.893960 containerd[1720]: time="2025-05-27T03:23:00.893938259Z" level=info msg="connecting to shim 5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98" address="unix:///run/containerd/s/a1622089cfb5113524fdb7a47f37c3395067d25acd3e4d886eed3ad74652145c" protocol=ttrpc version=3 May 27 03:23:00.914034 systemd[1]: Started cri-containerd-5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98.scope - libcontainer container 5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98. May 27 03:23:00.941659 containerd[1720]: time="2025-05-27T03:23:00.941609576Z" level=info msg="StartContainer for \"5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98\" returns successfully" May 27 03:23:00.945834 systemd[1]: cri-containerd-5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98.scope: Deactivated successfully. 
May 27 03:23:00.949252 containerd[1720]: time="2025-05-27T03:23:00.949137876Z" level=info msg="received exit event container_id:\"5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98\" id:\"5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98\" pid:3773 exited_at:{seconds:1748316180 nanos:948784883}" May 27 03:23:00.949440 containerd[1720]: time="2025-05-27T03:23:00.949426208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98\" id:\"5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98\" pid:3773 exited_at:{seconds:1748316180 nanos:948784883}" May 27 03:23:00.965857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5314801a29950cf5658d6686d0550971a1564f73f1ad8e37c059725b71f4dd98-rootfs.mount: Deactivated successfully. May 27 03:23:01.712552 kubelet[3072]: E0527 03:23:01.712113 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pft9h" podUID="aaa94a96-399c-4345-92d5-d811c3bc141a" May 27 03:23:01.787534 kubelet[3072]: I0527 03:23:01.787509 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:23:01.800913 kubelet[3072]: I0527 03:23:01.800842 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b8db7795b-2gntd" podStartSLOduration=3.52402292 podStartE2EDuration="5.800822267s" podCreationTimestamp="2025-05-27 03:22:56 +0000 UTC" firstStartedPulling="2025-05-27 03:22:57.42750095 +0000 UTC m=+17.796412701" lastFinishedPulling="2025-05-27 03:22:59.704300299 +0000 UTC m=+20.073212048" observedRunningTime="2025-05-27 03:23:00.801275381 +0000 UTC m=+21.170187157" watchObservedRunningTime="2025-05-27 03:23:01.800822267 +0000 UTC m=+22.169734020" 
May 27 03:23:03.712423 kubelet[3072]: E0527 03:23:03.711572 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pft9h" podUID="aaa94a96-399c-4345-92d5-d811c3bc141a" May 27 03:23:03.793396 containerd[1720]: time="2025-05-27T03:23:03.793335889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:23:05.713737 kubelet[3072]: E0527 03:23:05.713691 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pft9h" podUID="aaa94a96-399c-4345-92d5-d811c3bc141a" May 27 03:23:06.158706 containerd[1720]: time="2025-05-27T03:23:06.158661040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:06.160676 containerd[1720]: time="2025-05-27T03:23:06.160641479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:23:06.162962 containerd[1720]: time="2025-05-27T03:23:06.162913800Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:06.166046 containerd[1720]: time="2025-05-27T03:23:06.166000359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:06.166525 containerd[1720]: time="2025-05-27T03:23:06.166352861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" 
with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 2.372951461s" May 27 03:23:06.166525 containerd[1720]: time="2025-05-27T03:23:06.166380985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:23:06.171995 containerd[1720]: time="2025-05-27T03:23:06.171959186Z" level=info msg="CreateContainer within sandbox \"23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:23:06.186826 containerd[1720]: time="2025-05-27T03:23:06.186780594Z" level=info msg="Container e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:06.201218 containerd[1720]: time="2025-05-27T03:23:06.201191482Z" level=info msg="CreateContainer within sandbox \"23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521\"" May 27 03:23:06.201751 containerd[1720]: time="2025-05-27T03:23:06.201565717Z" level=info msg="StartContainer for \"e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521\"" May 27 03:23:06.203025 containerd[1720]: time="2025-05-27T03:23:06.202985419Z" level=info msg="connecting to shim e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521" address="unix:///run/containerd/s/a1622089cfb5113524fdb7a47f37c3395067d25acd3e4d886eed3ad74652145c" protocol=ttrpc version=3 May 27 03:23:06.226035 systemd[1]: Started cri-containerd-e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521.scope - libcontainer container 
e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521. May 27 03:23:06.257286 containerd[1720]: time="2025-05-27T03:23:06.257263415Z" level=info msg="StartContainer for \"e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521\" returns successfully" May 27 03:23:06.368092 kubelet[3072]: I0527 03:23:06.368057 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:23:07.364161 containerd[1720]: time="2025-05-27T03:23:07.364106059Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:23:07.366088 systemd[1]: cri-containerd-e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521.scope: Deactivated successfully. May 27 03:23:07.366727 systemd[1]: cri-containerd-e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521.scope: Consumed 416ms CPU time, 191.1M memory peak, 170.9M written to disk. 
May 27 03:23:07.369376 containerd[1720]: time="2025-05-27T03:23:07.369339687Z" level=info msg="received exit event container_id:\"e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521\" id:\"e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521\" pid:3835 exited_at:{seconds:1748316187 nanos:368848423}" May 27 03:23:07.369745 containerd[1720]: time="2025-05-27T03:23:07.369727220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521\" id:\"e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521\" pid:3835 exited_at:{seconds:1748316187 nanos:368848423}" May 27 03:23:07.372037 kubelet[3072]: I0527 03:23:07.371999 3072 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:23:07.395872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e21b9fbf235f99e1bcae691e68b826e6ae085a300a5b22051a326e0dca6aa521-rootfs.mount: Deactivated successfully. 
May 27 03:23:07.780123 kubelet[3072]: I0527 03:23:07.696686 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxk5r\" (UniqueName: \"kubernetes.io/projected/473c7f23-fd99-498d-a6c5-6681dfbec009-kube-api-access-gxk5r\") pod \"coredns-674b8bbfcf-rqp8z\" (UID: \"473c7f23-fd99-498d-a6c5-6681dfbec009\") " pod="kube-system/coredns-674b8bbfcf-rqp8z" May 27 03:23:07.780123 kubelet[3072]: I0527 03:23:07.696724 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473c7f23-fd99-498d-a6c5-6681dfbec009-config-volume\") pod \"coredns-674b8bbfcf-rqp8z\" (UID: \"473c7f23-fd99-498d-a6c5-6681dfbec009\") " pod="kube-system/coredns-674b8bbfcf-rqp8z" May 27 03:23:07.797500 kubelet[3072]: E0527 03:23:07.797413 3072 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered May 27 03:23:07.797723 kubelet[3072]: E0527 03:23:07.797636 3072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/473c7f23-fd99-498d-a6c5-6681dfbec009-config-volume podName:473c7f23-fd99-498d-a6c5-6681dfbec009 nodeName:}" failed. No retries permitted until 2025-05-27 03:23:08.29748341 +0000 UTC m=+28.666395162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/473c7f23-fd99-498d-a6c5-6681dfbec009-config-volume") pod "coredns-674b8bbfcf-rqp8z" (UID: "473c7f23-fd99-498d-a6c5-6681dfbec009") : object "kube-system"/"coredns" not registered May 27 03:23:07.944614 systemd[1]: Created slice kubepods-burstable-pod473c7f23_fd99_498d_a6c5_6681dfbec009.slice - libcontainer container kubepods-burstable-pod473c7f23_fd99_498d_a6c5_6681dfbec009.slice. 
May 27 03:23:07.952169 systemd[1]: Created slice kubepods-burstable-podbfdbd044_128d_4e08_85a0_8f80049c1230.slice - libcontainer container kubepods-burstable-podbfdbd044_128d_4e08_85a0_8f80049c1230.slice. May 27 03:23:07.998820 kubelet[3072]: I0527 03:23:07.998787 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tlkp\" (UniqueName: \"kubernetes.io/projected/bfdbd044-128d-4e08-85a0-8f80049c1230-kube-api-access-5tlkp\") pod \"coredns-674b8bbfcf-w998w\" (UID: \"bfdbd044-128d-4e08-85a0-8f80049c1230\") " pod="kube-system/coredns-674b8bbfcf-w998w" May 27 03:23:07.998942 kubelet[3072]: I0527 03:23:07.998854 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfdbd044-128d-4e08-85a0-8f80049c1230-config-volume\") pod \"coredns-674b8bbfcf-w998w\" (UID: \"bfdbd044-128d-4e08-85a0-8f80049c1230\") " pod="kube-system/coredns-674b8bbfcf-w998w" May 27 03:23:08.286049 containerd[1720]: time="2025-05-27T03:23:08.286002087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w998w,Uid:bfdbd044-128d-4e08-85a0-8f80049c1230,Namespace:kube-system,Attempt:0,}" May 27 03:23:08.301911 systemd[1]: Created slice kubepods-besteffort-podaaa94a96_399c_4345_92d5_d811c3bc141a.slice - libcontainer container kubepods-besteffort-podaaa94a96_399c_4345_92d5_d811c3bc141a.slice. May 27 03:23:08.316961 containerd[1720]: time="2025-05-27T03:23:08.316932516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pft9h,Uid:aaa94a96-399c-4345-92d5-d811c3bc141a,Namespace:calico-system,Attempt:0,}" May 27 03:23:08.330487 systemd[1]: Created slice kubepods-besteffort-pode3f74c0c_4c3c_46fc_b841_26a126161c3b.slice - libcontainer container kubepods-besteffort-pode3f74c0c_4c3c_46fc_b841_26a126161c3b.slice. 
May 27 03:23:08.341786 systemd[1]: Created slice kubepods-besteffort-poddfd983a2_0670_40aa_aa20_ca099e546438.slice - libcontainer container kubepods-besteffort-poddfd983a2_0670_40aa_aa20_ca099e546438.slice. May 27 03:23:08.351983 systemd[1]: Created slice kubepods-besteffort-pod4d81e6cc_fb55_4862_b180_e72b2e08be0e.slice - libcontainer container kubepods-besteffort-pod4d81e6cc_fb55_4862_b180_e72b2e08be0e.slice. May 27 03:23:08.365869 systemd[1]: Created slice kubepods-besteffort-pod88303983_293a_40aa_a529_14c26f7a181e.slice - libcontainer container kubepods-besteffort-pod88303983_293a_40aa_a529_14c26f7a181e.slice. May 27 03:23:08.370816 systemd[1]: Created slice kubepods-besteffort-pod2ff286f0_6dcf_479f_800b_8f5670d2a81a.slice - libcontainer container kubepods-besteffort-pod2ff286f0_6dcf_479f_800b_8f5670d2a81a.slice. May 27 03:23:08.400349 containerd[1720]: time="2025-05-27T03:23:08.400283720Z" level=error msg="Failed to destroy network for sandbox \"bd48c36968cbd70d6349fc358931f26214c481276af651644037bbcc48291be0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.402807 kubelet[3072]: I0527 03:23:08.401018 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ff286f0-6dcf-479f-800b-8f5670d2a81a-calico-apiserver-certs\") pod \"calico-apiserver-f58557fbc-j8plw\" (UID: \"2ff286f0-6dcf-479f-800b-8f5670d2a81a\") " pod="calico-apiserver/calico-apiserver-f58557fbc-j8plw" May 27 03:23:08.402807 kubelet[3072]: I0527 03:23:08.401059 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxsj5\" (UniqueName: \"kubernetes.io/projected/2ff286f0-6dcf-479f-800b-8f5670d2a81a-kube-api-access-pxsj5\") pod \"calico-apiserver-f58557fbc-j8plw\" (UID: 
\"2ff286f0-6dcf-479f-800b-8f5670d2a81a\") " pod="calico-apiserver/calico-apiserver-f58557fbc-j8plw" May 27 03:23:08.402807 kubelet[3072]: I0527 03:23:08.401080 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4c2\" (UniqueName: \"kubernetes.io/projected/4d81e6cc-fb55-4862-b180-e72b2e08be0e-kube-api-access-mn4c2\") pod \"goldmane-78d55f7ddc-mlshg\" (UID: \"4d81e6cc-fb55-4862-b180-e72b2e08be0e\") " pod="calico-system/goldmane-78d55f7ddc-mlshg" May 27 03:23:08.402807 kubelet[3072]: I0527 03:23:08.401101 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88303983-293a-40aa-a529-14c26f7a181e-whisker-backend-key-pair\") pod \"whisker-cd498bf4d-hh2pd\" (UID: \"88303983-293a-40aa-a529-14c26f7a181e\") " pod="calico-system/whisker-cd498bf4d-hh2pd" May 27 03:23:08.402807 kubelet[3072]: I0527 03:23:08.401120 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bwb\" (UniqueName: \"kubernetes.io/projected/dfd983a2-0670-40aa-aa20-ca099e546438-kube-api-access-s6bwb\") pod \"calico-apiserver-f58557fbc-zdhkv\" (UID: \"dfd983a2-0670-40aa-aa20-ca099e546438\") " pod="calico-apiserver/calico-apiserver-f58557fbc-zdhkv" May 27 03:23:08.402138 systemd[1]: run-netns-cni\x2d309aa6f3\x2d7275\x2dbfe4\x2dd12d\x2da68f74126e0c.mount: Deactivated successfully. 
May 27 03:23:08.403259 kubelet[3072]: I0527 03:23:08.401137 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d81e6cc-fb55-4862-b180-e72b2e08be0e-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-mlshg\" (UID: \"4d81e6cc-fb55-4862-b180-e72b2e08be0e\") " pod="calico-system/goldmane-78d55f7ddc-mlshg" May 27 03:23:08.403259 kubelet[3072]: I0527 03:23:08.401162 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f74c0c-4c3c-46fc-b841-26a126161c3b-tigera-ca-bundle\") pod \"calico-kube-controllers-8994977dd-fcvt7\" (UID: \"e3f74c0c-4c3c-46fc-b841-26a126161c3b\") " pod="calico-system/calico-kube-controllers-8994977dd-fcvt7" May 27 03:23:08.403259 kubelet[3072]: I0527 03:23:08.401187 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88303983-293a-40aa-a529-14c26f7a181e-whisker-ca-bundle\") pod \"whisker-cd498bf4d-hh2pd\" (UID: \"88303983-293a-40aa-a529-14c26f7a181e\") " pod="calico-system/whisker-cd498bf4d-hh2pd" May 27 03:23:08.403259 kubelet[3072]: I0527 03:23:08.401209 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppc48\" (UniqueName: \"kubernetes.io/projected/e3f74c0c-4c3c-46fc-b841-26a126161c3b-kube-api-access-ppc48\") pod \"calico-kube-controllers-8994977dd-fcvt7\" (UID: \"e3f74c0c-4c3c-46fc-b841-26a126161c3b\") " pod="calico-system/calico-kube-controllers-8994977dd-fcvt7" May 27 03:23:08.403259 kubelet[3072]: I0527 03:23:08.401235 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dfd983a2-0670-40aa-aa20-ca099e546438-calico-apiserver-certs\") pod 
\"calico-apiserver-f58557fbc-zdhkv\" (UID: \"dfd983a2-0670-40aa-aa20-ca099e546438\") " pod="calico-apiserver/calico-apiserver-f58557fbc-zdhkv" May 27 03:23:08.403377 kubelet[3072]: I0527 03:23:08.401257 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d81e6cc-fb55-4862-b180-e72b2e08be0e-config\") pod \"goldmane-78d55f7ddc-mlshg\" (UID: \"4d81e6cc-fb55-4862-b180-e72b2e08be0e\") " pod="calico-system/goldmane-78d55f7ddc-mlshg" May 27 03:23:08.403377 kubelet[3072]: I0527 03:23:08.401281 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzg68\" (UniqueName: \"kubernetes.io/projected/88303983-293a-40aa-a529-14c26f7a181e-kube-api-access-wzg68\") pod \"whisker-cd498bf4d-hh2pd\" (UID: \"88303983-293a-40aa-a529-14c26f7a181e\") " pod="calico-system/whisker-cd498bf4d-hh2pd" May 27 03:23:08.403377 kubelet[3072]: I0527 03:23:08.401299 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4d81e6cc-fb55-4862-b180-e72b2e08be0e-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-mlshg\" (UID: \"4d81e6cc-fb55-4862-b180-e72b2e08be0e\") " pod="calico-system/goldmane-78d55f7ddc-mlshg" May 27 03:23:08.405516 containerd[1720]: time="2025-05-27T03:23:08.405473048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w998w,Uid:bfdbd044-128d-4e08-85a0-8f80049c1230,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd48c36968cbd70d6349fc358931f26214c481276af651644037bbcc48291be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.405718 kubelet[3072]: E0527 03:23:08.405686 3072 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd48c36968cbd70d6349fc358931f26214c481276af651644037bbcc48291be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.405773 kubelet[3072]: E0527 03:23:08.405760 3072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd48c36968cbd70d6349fc358931f26214c481276af651644037bbcc48291be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w998w" May 27 03:23:08.405805 kubelet[3072]: E0527 03:23:08.405782 3072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd48c36968cbd70d6349fc358931f26214c481276af651644037bbcc48291be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w998w" May 27 03:23:08.405927 kubelet[3072]: E0527 03:23:08.405868 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-w998w_kube-system(bfdbd044-128d-4e08-85a0-8f80049c1230)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-w998w_kube-system(bfdbd044-128d-4e08-85a0-8f80049c1230)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd48c36968cbd70d6349fc358931f26214c481276af651644037bbcc48291be0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w998w" podUID="bfdbd044-128d-4e08-85a0-8f80049c1230" May 27 03:23:08.414655 containerd[1720]: time="2025-05-27T03:23:08.414614988Z" level=error msg="Failed to destroy network for sandbox \"f6fbc721f3683f80c426e38d86ae4120aefc43c7a3843a0b48a244ba3cfa2b1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.416215 systemd[1]: run-netns-cni\x2dc3b00cf1\x2d3e40\x2d80b8\x2d997d\x2d76c4f6f5464a.mount: Deactivated successfully. May 27 03:23:08.419261 containerd[1720]: time="2025-05-27T03:23:08.419231358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pft9h,Uid:aaa94a96-399c-4345-92d5-d811c3bc141a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6fbc721f3683f80c426e38d86ae4120aefc43c7a3843a0b48a244ba3cfa2b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.419409 kubelet[3072]: E0527 03:23:08.419385 3072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6fbc721f3683f80c426e38d86ae4120aefc43c7a3843a0b48a244ba3cfa2b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.419454 kubelet[3072]: E0527 03:23:08.419427 3072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6fbc721f3683f80c426e38d86ae4120aefc43c7a3843a0b48a244ba3cfa2b1a\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pft9h" May 27 03:23:08.419454 kubelet[3072]: E0527 03:23:08.419446 3072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6fbc721f3683f80c426e38d86ae4120aefc43c7a3843a0b48a244ba3cfa2b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pft9h" May 27 03:23:08.419522 kubelet[3072]: E0527 03:23:08.419486 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pft9h_calico-system(aaa94a96-399c-4345-92d5-d811c3bc141a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pft9h_calico-system(aaa94a96-399c-4345-92d5-d811c3bc141a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6fbc721f3683f80c426e38d86ae4120aefc43c7a3843a0b48a244ba3cfa2b1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pft9h" podUID="aaa94a96-399c-4345-92d5-d811c3bc141a" May 27 03:23:08.548762 containerd[1720]: time="2025-05-27T03:23:08.548676269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqp8z,Uid:473c7f23-fd99-498d-a6c5-6681dfbec009,Namespace:kube-system,Attempt:0,}" May 27 03:23:08.588149 containerd[1720]: time="2025-05-27T03:23:08.588113223Z" level=error msg="Failed to destroy network for sandbox \"4aff58917f533f076a7269dd0cad0ba02e6a9a96a8aad969df6ef28829efcadb\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.590612 containerd[1720]: time="2025-05-27T03:23:08.590579153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqp8z,Uid:473c7f23-fd99-498d-a6c5-6681dfbec009,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aff58917f533f076a7269dd0cad0ba02e6a9a96a8aad969df6ef28829efcadb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.590788 kubelet[3072]: E0527 03:23:08.590722 3072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aff58917f533f076a7269dd0cad0ba02e6a9a96a8aad969df6ef28829efcadb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.590788 kubelet[3072]: E0527 03:23:08.590768 3072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aff58917f533f076a7269dd0cad0ba02e6a9a96a8aad969df6ef28829efcadb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rqp8z" May 27 03:23:08.590917 kubelet[3072]: E0527 03:23:08.590786 3072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aff58917f533f076a7269dd0cad0ba02e6a9a96a8aad969df6ef28829efcadb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rqp8z" May 27 03:23:08.590917 kubelet[3072]: E0527 03:23:08.590832 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rqp8z_kube-system(473c7f23-fd99-498d-a6c5-6681dfbec009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rqp8z_kube-system(473c7f23-fd99-498d-a6c5-6681dfbec009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4aff58917f533f076a7269dd0cad0ba02e6a9a96a8aad969df6ef28829efcadb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rqp8z" podUID="473c7f23-fd99-498d-a6c5-6681dfbec009" May 27 03:23:08.637660 containerd[1720]: time="2025-05-27T03:23:08.637629479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8994977dd-fcvt7,Uid:e3f74c0c-4c3c-46fc-b841-26a126161c3b,Namespace:calico-system,Attempt:0,}" May 27 03:23:08.648561 containerd[1720]: time="2025-05-27T03:23:08.648491321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f58557fbc-zdhkv,Uid:dfd983a2-0670-40aa-aa20-ca099e546438,Namespace:calico-apiserver,Attempt:0,}" May 27 03:23:08.659079 containerd[1720]: time="2025-05-27T03:23:08.659028535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-mlshg,Uid:4d81e6cc-fb55-4862-b180-e72b2e08be0e,Namespace:calico-system,Attempt:0,}" May 27 03:23:08.671906 containerd[1720]: time="2025-05-27T03:23:08.671868172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd498bf4d-hh2pd,Uid:88303983-293a-40aa-a529-14c26f7a181e,Namespace:calico-system,Attempt:0,}" May 27 03:23:08.677759 containerd[1720]: 
time="2025-05-27T03:23:08.677725066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f58557fbc-j8plw,Uid:2ff286f0-6dcf-479f-800b-8f5670d2a81a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:23:08.690710 containerd[1720]: time="2025-05-27T03:23:08.690558629Z" level=error msg="Failed to destroy network for sandbox \"617bbecd7eb4b78fc499c6b9c7452bc00c246c307d6e735fcdecfa2a4d60406a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.700956 containerd[1720]: time="2025-05-27T03:23:08.700924416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8994977dd-fcvt7,Uid:e3f74c0c-4c3c-46fc-b841-26a126161c3b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"617bbecd7eb4b78fc499c6b9c7452bc00c246c307d6e735fcdecfa2a4d60406a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.701389 kubelet[3072]: E0527 03:23:08.701293 3072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"617bbecd7eb4b78fc499c6b9c7452bc00c246c307d6e735fcdecfa2a4d60406a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.701389 kubelet[3072]: E0527 03:23:08.701340 3072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"617bbecd7eb4b78fc499c6b9c7452bc00c246c307d6e735fcdecfa2a4d60406a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8994977dd-fcvt7" May 27 03:23:08.701389 kubelet[3072]: E0527 03:23:08.701361 3072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"617bbecd7eb4b78fc499c6b9c7452bc00c246c307d6e735fcdecfa2a4d60406a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8994977dd-fcvt7" May 27 03:23:08.701790 kubelet[3072]: E0527 03:23:08.701537 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8994977dd-fcvt7_calico-system(e3f74c0c-4c3c-46fc-b841-26a126161c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8994977dd-fcvt7_calico-system(e3f74c0c-4c3c-46fc-b841-26a126161c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"617bbecd7eb4b78fc499c6b9c7452bc00c246c307d6e735fcdecfa2a4d60406a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8994977dd-fcvt7" podUID="e3f74c0c-4c3c-46fc-b841-26a126161c3b" May 27 03:23:08.732961 containerd[1720]: time="2025-05-27T03:23:08.732936492Z" level=error msg="Failed to destroy network for sandbox \"b9e93124b98aa95f852cb7adef7ac45a3f6a7771d60cb3e4bac06969132ebc11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.740692 containerd[1720]: time="2025-05-27T03:23:08.740579321Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f58557fbc-zdhkv,Uid:dfd983a2-0670-40aa-aa20-ca099e546438,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e93124b98aa95f852cb7adef7ac45a3f6a7771d60cb3e4bac06969132ebc11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.740939 kubelet[3072]: E0527 03:23:08.740770 3072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e93124b98aa95f852cb7adef7ac45a3f6a7771d60cb3e4bac06969132ebc11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.741078 kubelet[3072]: E0527 03:23:08.740959 3072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e93124b98aa95f852cb7adef7ac45a3f6a7771d60cb3e4bac06969132ebc11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f58557fbc-zdhkv" May 27 03:23:08.741078 kubelet[3072]: E0527 03:23:08.740980 3072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e93124b98aa95f852cb7adef7ac45a3f6a7771d60cb3e4bac06969132ebc11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f58557fbc-zdhkv" May 27 03:23:08.741078 kubelet[3072]: E0527 
03:23:08.741030 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f58557fbc-zdhkv_calico-apiserver(dfd983a2-0670-40aa-aa20-ca099e546438)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f58557fbc-zdhkv_calico-apiserver(dfd983a2-0670-40aa-aa20-ca099e546438)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9e93124b98aa95f852cb7adef7ac45a3f6a7771d60cb3e4bac06969132ebc11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f58557fbc-zdhkv" podUID="dfd983a2-0670-40aa-aa20-ca099e546438" May 27 03:23:08.747528 containerd[1720]: time="2025-05-27T03:23:08.747501501Z" level=error msg="Failed to destroy network for sandbox \"881c1b46e001d6504405573ea7937fdc95d4d2e48937a3bc1ff189222ff69cf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.749954 containerd[1720]: time="2025-05-27T03:23:08.749835636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-mlshg,Uid:4d81e6cc-fb55-4862-b180-e72b2e08be0e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"881c1b46e001d6504405573ea7937fdc95d4d2e48937a3bc1ff189222ff69cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.750409 kubelet[3072]: E0527 03:23:08.750126 3072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"881c1b46e001d6504405573ea7937fdc95d4d2e48937a3bc1ff189222ff69cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.750409 kubelet[3072]: E0527 03:23:08.750178 3072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"881c1b46e001d6504405573ea7937fdc95d4d2e48937a3bc1ff189222ff69cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-mlshg" May 27 03:23:08.750409 kubelet[3072]: E0527 03:23:08.750197 3072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"881c1b46e001d6504405573ea7937fdc95d4d2e48937a3bc1ff189222ff69cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-mlshg" May 27 03:23:08.750517 kubelet[3072]: E0527 03:23:08.750245 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-mlshg_calico-system(4d81e6cc-fb55-4862-b180-e72b2e08be0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-mlshg_calico-system(4d81e6cc-fb55-4862-b180-e72b2e08be0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"881c1b46e001d6504405573ea7937fdc95d4d2e48937a3bc1ff189222ff69cf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" 
podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e" May 27 03:23:08.766462 containerd[1720]: time="2025-05-27T03:23:08.766389888Z" level=error msg="Failed to destroy network for sandbox \"a4da8259b25d20f6375945761f26977f53191f23977b2ac4f183536a2a4a8219\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.768916 containerd[1720]: time="2025-05-27T03:23:08.768857993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f58557fbc-j8plw,Uid:2ff286f0-6dcf-479f-800b-8f5670d2a81a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4da8259b25d20f6375945761f26977f53191f23977b2ac4f183536a2a4a8219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.769125 kubelet[3072]: E0527 03:23:08.769103 3072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4da8259b25d20f6375945761f26977f53191f23977b2ac4f183536a2a4a8219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.769456 kubelet[3072]: E0527 03:23:08.769151 3072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4da8259b25d20f6375945761f26977f53191f23977b2ac4f183536a2a4a8219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f58557fbc-j8plw" May 27 03:23:08.769456 
kubelet[3072]: E0527 03:23:08.769173 3072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4da8259b25d20f6375945761f26977f53191f23977b2ac4f183536a2a4a8219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f58557fbc-j8plw" May 27 03:23:08.769456 kubelet[3072]: E0527 03:23:08.769217 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f58557fbc-j8plw_calico-apiserver(2ff286f0-6dcf-479f-800b-8f5670d2a81a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f58557fbc-j8plw_calico-apiserver(2ff286f0-6dcf-479f-800b-8f5670d2a81a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4da8259b25d20f6375945761f26977f53191f23977b2ac4f183536a2a4a8219\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f58557fbc-j8plw" podUID="2ff286f0-6dcf-479f-800b-8f5670d2a81a" May 27 03:23:08.771242 containerd[1720]: time="2025-05-27T03:23:08.771166991Z" level=error msg="Failed to destroy network for sandbox \"bd4bde9df3c97c868dbde703966a154eef1168ae244021c23ca3c84281f76494\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.773593 containerd[1720]: time="2025-05-27T03:23:08.773569484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd498bf4d-hh2pd,Uid:88303983-293a-40aa-a529-14c26f7a181e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"bd4bde9df3c97c868dbde703966a154eef1168ae244021c23ca3c84281f76494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.773728 kubelet[3072]: E0527 03:23:08.773708 3072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd4bde9df3c97c868dbde703966a154eef1168ae244021c23ca3c84281f76494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:08.773765 kubelet[3072]: E0527 03:23:08.773746 3072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd4bde9df3c97c868dbde703966a154eef1168ae244021c23ca3c84281f76494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cd498bf4d-hh2pd" May 27 03:23:08.773789 kubelet[3072]: E0527 03:23:08.773764 3072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd4bde9df3c97c868dbde703966a154eef1168ae244021c23ca3c84281f76494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cd498bf4d-hh2pd" May 27 03:23:08.773833 kubelet[3072]: E0527 03:23:08.773811 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cd498bf4d-hh2pd_calico-system(88303983-293a-40aa-a529-14c26f7a181e)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"whisker-cd498bf4d-hh2pd_calico-system(88303983-293a-40aa-a529-14c26f7a181e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd4bde9df3c97c868dbde703966a154eef1168ae244021c23ca3c84281f76494\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cd498bf4d-hh2pd" podUID="88303983-293a-40aa-a529-14c26f7a181e" May 27 03:23:08.807680 containerd[1720]: time="2025-05-27T03:23:08.807617171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:23:14.936871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751155337.mount: Deactivated successfully. May 27 03:23:14.991480 containerd[1720]: time="2025-05-27T03:23:14.991430010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:14.993452 containerd[1720]: time="2025-05-27T03:23:14.993415961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:23:14.995911 containerd[1720]: time="2025-05-27T03:23:14.995861993Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:14.998777 containerd[1720]: time="2025-05-27T03:23:14.998740786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:14.999194 containerd[1720]: time="2025-05-27T03:23:14.999064241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.191416725s" May 27 03:23:14.999194 containerd[1720]: time="2025-05-27T03:23:14.999096068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:23:15.011135 containerd[1720]: time="2025-05-27T03:23:15.011104772Z" level=info msg="CreateContainer within sandbox \"23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:23:15.034794 containerd[1720]: time="2025-05-27T03:23:15.033037225Z" level=info msg="Container 99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:15.036469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301780353.mount: Deactivated successfully. 
May 27 03:23:15.048980 containerd[1720]: time="2025-05-27T03:23:15.048951094Z" level=info msg="CreateContainer within sandbox \"23d4ef23e910f683fa968cd28b903825d6fcef70307f62044717bfe7b5fb6db7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\"" May 27 03:23:15.049489 containerd[1720]: time="2025-05-27T03:23:15.049468668Z" level=info msg="StartContainer for \"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\"" May 27 03:23:15.051313 containerd[1720]: time="2025-05-27T03:23:15.050973826Z" level=info msg="connecting to shim 99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1" address="unix:///run/containerd/s/a1622089cfb5113524fdb7a47f37c3395067d25acd3e4d886eed3ad74652145c" protocol=ttrpc version=3 May 27 03:23:15.072045 systemd[1]: Started cri-containerd-99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1.scope - libcontainer container 99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1. May 27 03:23:15.103299 containerd[1720]: time="2025-05-27T03:23:15.103228655Z" level=info msg="StartContainer for \"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\" returns successfully" May 27 03:23:15.420410 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:23:15.420513 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
May 27 03:23:15.538476 kubelet[3072]: I0527 03:23:15.538320 3072 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88303983-293a-40aa-a529-14c26f7a181e-whisker-ca-bundle\") pod \"88303983-293a-40aa-a529-14c26f7a181e\" (UID: \"88303983-293a-40aa-a529-14c26f7a181e\") " May 27 03:23:15.538476 kubelet[3072]: I0527 03:23:15.538452 3072 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88303983-293a-40aa-a529-14c26f7a181e-whisker-backend-key-pair\") pod \"88303983-293a-40aa-a529-14c26f7a181e\" (UID: \"88303983-293a-40aa-a529-14c26f7a181e\") " May 27 03:23:15.538847 kubelet[3072]: I0527 03:23:15.538694 3072 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88303983-293a-40aa-a529-14c26f7a181e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "88303983-293a-40aa-a529-14c26f7a181e" (UID: "88303983-293a-40aa-a529-14c26f7a181e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:23:15.539716 kubelet[3072]: I0527 03:23:15.539180 3072 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzg68\" (UniqueName: \"kubernetes.io/projected/88303983-293a-40aa-a529-14c26f7a181e-kube-api-access-wzg68\") pod \"88303983-293a-40aa-a529-14c26f7a181e\" (UID: \"88303983-293a-40aa-a529-14c26f7a181e\") " May 27 03:23:15.539716 kubelet[3072]: I0527 03:23:15.539340 3072 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88303983-293a-40aa-a529-14c26f7a181e-whisker-ca-bundle\") on node \"ci-4344.0.0-a-98ca04e8ee\" DevicePath \"\"" May 27 03:23:15.543086 kubelet[3072]: I0527 03:23:15.543022 3072 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88303983-293a-40aa-a529-14c26f7a181e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "88303983-293a-40aa-a529-14c26f7a181e" (UID: "88303983-293a-40aa-a529-14c26f7a181e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:23:15.546060 kubelet[3072]: I0527 03:23:15.546008 3072 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88303983-293a-40aa-a529-14c26f7a181e-kube-api-access-wzg68" (OuterVolumeSpecName: "kube-api-access-wzg68") pod "88303983-293a-40aa-a529-14c26f7a181e" (UID: "88303983-293a-40aa-a529-14c26f7a181e"). InnerVolumeSpecName "kube-api-access-wzg68". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:23:15.639758 kubelet[3072]: I0527 03:23:15.639738 3072 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88303983-293a-40aa-a529-14c26f7a181e-whisker-backend-key-pair\") on node \"ci-4344.0.0-a-98ca04e8ee\" DevicePath \"\"" May 27 03:23:15.639758 kubelet[3072]: I0527 03:23:15.639758 3072 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wzg68\" (UniqueName: \"kubernetes.io/projected/88303983-293a-40aa-a529-14c26f7a181e-kube-api-access-wzg68\") on node \"ci-4344.0.0-a-98ca04e8ee\" DevicePath \"\"" May 27 03:23:15.715578 systemd[1]: Removed slice kubepods-besteffort-pod88303983_293a_40aa_a529_14c26f7a181e.slice - libcontainer container kubepods-besteffort-pod88303983_293a_40aa_a529_14c26f7a181e.slice. May 27 03:23:15.839399 kubelet[3072]: I0527 03:23:15.839343 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gx8q5" podStartSLOduration=1.433913166 podStartE2EDuration="18.839237007s" podCreationTimestamp="2025-05-27 03:22:57 +0000 UTC" firstStartedPulling="2025-05-27 03:22:57.594434713 +0000 UTC m=+17.963346469" lastFinishedPulling="2025-05-27 03:23:14.999758561 +0000 UTC m=+35.368670310" observedRunningTime="2025-05-27 03:23:15.837582833 +0000 UTC m=+36.206494585" watchObservedRunningTime="2025-05-27 03:23:15.839237007 +0000 UTC m=+36.208148763" May 27 03:23:15.897922 systemd[1]: Created slice kubepods-besteffort-pod8c6da0c4_086c_499a_b60b_ce24550cd879.slice - libcontainer container kubepods-besteffort-pod8c6da0c4_086c_499a_b60b_ce24550cd879.slice. May 27 03:23:15.937119 systemd[1]: var-lib-kubelet-pods-88303983\x2d293a\x2d40aa\x2da529\x2d14c26f7a181e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwzg68.mount: Deactivated successfully. 
May 27 03:23:15.937206 systemd[1]: var-lib-kubelet-pods-88303983\x2d293a\x2d40aa\x2da529\x2d14c26f7a181e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 03:23:15.941693 kubelet[3072]: I0527 03:23:15.941671 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c6da0c4-086c-499a-b60b-ce24550cd879-whisker-backend-key-pair\") pod \"whisker-6bdfd48f46-dwlzl\" (UID: \"8c6da0c4-086c-499a-b60b-ce24550cd879\") " pod="calico-system/whisker-6bdfd48f46-dwlzl" May 27 03:23:15.941762 kubelet[3072]: I0527 03:23:15.941705 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6da0c4-086c-499a-b60b-ce24550cd879-whisker-ca-bundle\") pod \"whisker-6bdfd48f46-dwlzl\" (UID: \"8c6da0c4-086c-499a-b60b-ce24550cd879\") " pod="calico-system/whisker-6bdfd48f46-dwlzl" May 27 03:23:15.941762 kubelet[3072]: I0527 03:23:15.941724 3072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68kpg\" (UniqueName: \"kubernetes.io/projected/8c6da0c4-086c-499a-b60b-ce24550cd879-kube-api-access-68kpg\") pod \"whisker-6bdfd48f46-dwlzl\" (UID: \"8c6da0c4-086c-499a-b60b-ce24550cd879\") " pod="calico-system/whisker-6bdfd48f46-dwlzl" May 27 03:23:16.204990 containerd[1720]: time="2025-05-27T03:23:16.204934234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bdfd48f46-dwlzl,Uid:8c6da0c4-086c-499a-b60b-ce24550cd879,Namespace:calico-system,Attempt:0,}" May 27 03:23:16.292345 systemd-networkd[1599]: cali5403f1496f6: Link UP May 27 03:23:16.294043 systemd-networkd[1599]: cali5403f1496f6: Gained carrier May 27 03:23:16.306360 containerd[1720]: 2025-05-27 03:23:16.229 [INFO][4160] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 
03:23:16.306360 containerd[1720]: 2025-05-27 03:23:16.236 [INFO][4160] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0 whisker-6bdfd48f46- calico-system 8c6da0c4-086c-499a-b60b-ce24550cd879 876 0 2025-05-27 03:23:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bdfd48f46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.0.0-a-98ca04e8ee whisker-6bdfd48f46-dwlzl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5403f1496f6 [] [] }} ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Namespace="calico-system" Pod="whisker-6bdfd48f46-dwlzl" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-" May 27 03:23:16.306360 containerd[1720]: 2025-05-27 03:23:16.236 [INFO][4160] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Namespace="calico-system" Pod="whisker-6bdfd48f46-dwlzl" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" May 27 03:23:16.306360 containerd[1720]: 2025-05-27 03:23:16.256 [INFO][4171] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" HandleID="k8s-pod-network.134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" May 27 03:23:16.306768 containerd[1720]: 2025-05-27 03:23:16.256 [INFO][4171] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" HandleID="k8s-pod-network.134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" 
Workload="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-98ca04e8ee", "pod":"whisker-6bdfd48f46-dwlzl", "timestamp":"2025-05-27 03:23:16.256473628 +0000 UTC"}, Hostname:"ci-4344.0.0-a-98ca04e8ee", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:23:16.306768 containerd[1720]: 2025-05-27 03:23:16.256 [INFO][4171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:23:16.306768 containerd[1720]: 2025-05-27 03:23:16.256 [INFO][4171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:23:16.306768 containerd[1720]: 2025-05-27 03:23:16.256 [INFO][4171] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-98ca04e8ee' May 27 03:23:16.306768 containerd[1720]: 2025-05-27 03:23:16.261 [INFO][4171] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:16.306768 containerd[1720]: 2025-05-27 03:23:16.264 [INFO][4171] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:16.306768 containerd[1720]: 2025-05-27 03:23:16.268 [INFO][4171] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:16.306768 containerd[1720]: 2025-05-27 03:23:16.269 [INFO][4171] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:16.306768 containerd[1720]: 2025-05-27 03:23:16.271 [INFO][4171] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:16.306979 
containerd[1720]: 2025-05-27 03:23:16.271 [INFO][4171] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:16.306979 containerd[1720]: 2025-05-27 03:23:16.272 [INFO][4171] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4 May 27 03:23:16.306979 containerd[1720]: 2025-05-27 03:23:16.277 [INFO][4171] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:16.306979 containerd[1720]: 2025-05-27 03:23:16.280 [INFO][4171] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.12.129/26] block=192.168.12.128/26 handle="k8s-pod-network.134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:16.306979 containerd[1720]: 2025-05-27 03:23:16.281 [INFO][4171] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.129/26] handle="k8s-pod-network.134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:16.306979 containerd[1720]: 2025-05-27 03:23:16.281 [INFO][4171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:23:16.306979 containerd[1720]: 2025-05-27 03:23:16.281 [INFO][4171] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.129/26] IPv6=[] ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" HandleID="k8s-pod-network.134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" May 27 03:23:16.307109 containerd[1720]: 2025-05-27 03:23:16.283 [INFO][4160] cni-plugin/k8s.go 418: Populated endpoint ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Namespace="calico-system" Pod="whisker-6bdfd48f46-dwlzl" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0", GenerateName:"whisker-6bdfd48f46-", Namespace:"calico-system", SelfLink:"", UID:"8c6da0c4-086c-499a-b60b-ce24550cd879", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bdfd48f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"", Pod:"whisker-6bdfd48f46-dwlzl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali5403f1496f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:16.307109 containerd[1720]: 2025-05-27 03:23:16.283 [INFO][4160] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.129/32] ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Namespace="calico-system" Pod="whisker-6bdfd48f46-dwlzl" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" May 27 03:23:16.307183 containerd[1720]: 2025-05-27 03:23:16.283 [INFO][4160] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5403f1496f6 ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Namespace="calico-system" Pod="whisker-6bdfd48f46-dwlzl" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" May 27 03:23:16.307183 containerd[1720]: 2025-05-27 03:23:16.293 [INFO][4160] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Namespace="calico-system" Pod="whisker-6bdfd48f46-dwlzl" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" May 27 03:23:16.307224 containerd[1720]: 2025-05-27 03:23:16.293 [INFO][4160] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Namespace="calico-system" Pod="whisker-6bdfd48f46-dwlzl" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0", GenerateName:"whisker-6bdfd48f46-", Namespace:"calico-system", SelfLink:"", 
UID:"8c6da0c4-086c-499a-b60b-ce24550cd879", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bdfd48f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4", Pod:"whisker-6bdfd48f46-dwlzl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5403f1496f6", MAC:"42:2c:5b:dc:05:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:16.307275 containerd[1720]: 2025-05-27 03:23:16.304 [INFO][4160] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" Namespace="calico-system" Pod="whisker-6bdfd48f46-dwlzl" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-whisker--6bdfd48f46--dwlzl-eth0" May 27 03:23:16.340665 containerd[1720]: time="2025-05-27T03:23:16.340589086Z" level=info msg="connecting to shim 134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4" address="unix:///run/containerd/s/bfc06138b9344312c2dc096087dfed3ca38fa44d83da5e1cc3994fd57c71ee70" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:16.356013 systemd[1]: Started 
cri-containerd-134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4.scope - libcontainer container 134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4. May 27 03:23:16.389773 containerd[1720]: time="2025-05-27T03:23:16.389751291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bdfd48f46-dwlzl,Uid:8c6da0c4-086c-499a-b60b-ce24550cd879,Namespace:calico-system,Attempt:0,} returns sandbox id \"134e7212510e64306b859cb7074d51cd35b88639d11add262cf914b6b91bbba4\"" May 27 03:23:16.391126 containerd[1720]: time="2025-05-27T03:23:16.391035313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:23:16.580263 containerd[1720]: time="2025-05-27T03:23:16.580133540Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:23:16.583330 containerd[1720]: time="2025-05-27T03:23:16.583273464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:23:16.583444 containerd[1720]: time="2025-05-27T03:23:16.583291938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:23:16.583560 kubelet[3072]: E0527 03:23:16.583509 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:23:16.583904 kubelet[3072]: E0527 03:23:16.583584 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:23:16.584085 kubelet[3072]: E0527 03:23:16.584040 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:db16f07b00b84e98ae55f4b0d0db1dcc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:n
il,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:23:16.586294 containerd[1720]: time="2025-05-27T03:23:16.586268558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:23:16.747897 containerd[1720]: time="2025-05-27T03:23:16.747782184Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:23:16.750533 containerd[1720]: time="2025-05-27T03:23:16.750505155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:23:16.750706 containerd[1720]: time="2025-05-27T03:23:16.750516121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:23:16.751916 kubelet[3072]: E0527 03:23:16.750997 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:23:16.751916 kubelet[3072]: E0527 03:23:16.751100 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:23:16.752272 kubelet[3072]: E0527 03:23:16.752229 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:23:16.753745 kubelet[3072]: E0527 03:23:16.753699 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:23:16.830490 kubelet[3072]: E0527 03:23:16.830338 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:23:17.219652 systemd-networkd[1599]: vxlan.calico: Link UP May 27 03:23:17.219659 systemd-networkd[1599]: vxlan.calico: Gained carrier May 27 03:23:17.693993 systemd-networkd[1599]: cali5403f1496f6: Gained IPv6LL May 27 03:23:17.714363 kubelet[3072]: I0527 03:23:17.714328 3072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88303983-293a-40aa-a529-14c26f7a181e" path="/var/lib/kubelet/pods/88303983-293a-40aa-a529-14c26f7a181e/volumes" May 27 03:23:17.830851 kubelet[3072]: E0527 03:23:17.830746 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:23:18.590063 systemd-networkd[1599]: vxlan.calico: Gained IPv6LL May 27 03:23:20.713147 containerd[1720]: time="2025-05-27T03:23:20.712835461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqp8z,Uid:473c7f23-fd99-498d-a6c5-6681dfbec009,Namespace:kube-system,Attempt:0,}" May 27 03:23:20.713147 containerd[1720]: time="2025-05-27T03:23:20.712906493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-mlshg,Uid:4d81e6cc-fb55-4862-b180-e72b2e08be0e,Namespace:calico-system,Attempt:0,}" May 27 03:23:20.713147 containerd[1720]: time="2025-05-27T03:23:20.712835274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f58557fbc-j8plw,Uid:2ff286f0-6dcf-479f-800b-8f5670d2a81a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:23:20.713757 containerd[1720]: time="2025-05-27T03:23:20.713306901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f58557fbc-zdhkv,Uid:dfd983a2-0670-40aa-aa20-ca099e546438,Namespace:calico-apiserver,Attempt:0,}" May 27 03:23:20.894494 systemd-networkd[1599]: cali4f2cc6035f4: Link UP May 27 03:23:20.896804 systemd-networkd[1599]: cali4f2cc6035f4: Gained carrier May 27 03:23:20.912454 containerd[1720]: 2025-05-27 03:23:20.809 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0 goldmane-78d55f7ddc- calico-system 4d81e6cc-fb55-4862-b180-e72b2e08be0e 811 0 2025-05-27 03:22:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.0.0-a-98ca04e8ee goldmane-78d55f7ddc-mlshg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4f2cc6035f4 [] [] }} ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Namespace="calico-system" Pod="goldmane-78d55f7ddc-mlshg" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-" May 27 03:23:20.912454 containerd[1720]: 2025-05-27 03:23:20.809 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Namespace="calico-system" Pod="goldmane-78d55f7ddc-mlshg" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" May 27 03:23:20.912454 containerd[1720]: 2025-05-27 03:23:20.857 [INFO][4489] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" HandleID="k8s-pod-network.acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" May 27 03:23:20.912660 containerd[1720]: 2025-05-27 03:23:20.857 [INFO][4489] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" HandleID="k8s-pod-network.acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9980), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-98ca04e8ee", "pod":"goldmane-78d55f7ddc-mlshg", "timestamp":"2025-05-27 03:23:20.857558569 +0000 UTC"}, Hostname:"ci-4344.0.0-a-98ca04e8ee", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:23:20.912660 containerd[1720]: 2025-05-27 03:23:20.857 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:23:20.912660 containerd[1720]: 2025-05-27 03:23:20.857 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:23:20.912660 containerd[1720]: 2025-05-27 03:23:20.857 [INFO][4489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-98ca04e8ee' May 27 03:23:20.912660 containerd[1720]: 2025-05-27 03:23:20.865 [INFO][4489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:20.912660 containerd[1720]: 2025-05-27 03:23:20.869 [INFO][4489] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:20.912660 containerd[1720]: 2025-05-27 03:23:20.872 [INFO][4489] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:20.912660 containerd[1720]: 2025-05-27 03:23:20.874 [INFO][4489] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:20.912660 containerd[1720]: 2025-05-27 03:23:20.875 [INFO][4489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:20.912891 containerd[1720]: 2025-05-27 03:23:20.875 [INFO][4489] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.12.128/26 
handle="k8s-pod-network.acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:20.912891 containerd[1720]: 2025-05-27 03:23:20.876 [INFO][4489] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845 May 27 03:23:20.912891 containerd[1720]: 2025-05-27 03:23:20.882 [INFO][4489] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:20.912891 containerd[1720]: 2025-05-27 03:23:20.886 [INFO][4489] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.12.130/26] block=192.168.12.128/26 handle="k8s-pod-network.acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:20.912891 containerd[1720]: 2025-05-27 03:23:20.886 [INFO][4489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.130/26] handle="k8s-pod-network.acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:20.912891 containerd[1720]: 2025-05-27 03:23:20.886 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:23:20.912891 containerd[1720]: 2025-05-27 03:23:20.886 [INFO][4489] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.130/26] IPv6=[] ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" HandleID="k8s-pod-network.acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" May 27 03:23:20.913058 containerd[1720]: 2025-05-27 03:23:20.888 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Namespace="calico-system" Pod="goldmane-78d55f7ddc-mlshg" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"4d81e6cc-fb55-4862-b180-e72b2e08be0e", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"", Pod:"goldmane-78d55f7ddc-mlshg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali4f2cc6035f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:20.913058 containerd[1720]: 2025-05-27 03:23:20.888 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.130/32] ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Namespace="calico-system" Pod="goldmane-78d55f7ddc-mlshg" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" May 27 03:23:20.913154 containerd[1720]: 2025-05-27 03:23:20.888 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f2cc6035f4 ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Namespace="calico-system" Pod="goldmane-78d55f7ddc-mlshg" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" May 27 03:23:20.913154 containerd[1720]: 2025-05-27 03:23:20.898 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Namespace="calico-system" Pod="goldmane-78d55f7ddc-mlshg" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" May 27 03:23:20.913206 containerd[1720]: 2025-05-27 03:23:20.898 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Namespace="calico-system" Pod="goldmane-78d55f7ddc-mlshg" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", 
UID:"4d81e6cc-fb55-4862-b180-e72b2e08be0e", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845", Pod:"goldmane-78d55f7ddc-mlshg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f2cc6035f4", MAC:"0e:9d:76:e7:d1:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:20.913266 containerd[1720]: 2025-05-27 03:23:20.909 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" Namespace="calico-system" Pod="goldmane-78d55f7ddc-mlshg" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-goldmane--78d55f7ddc--mlshg-eth0" May 27 03:23:20.955910 containerd[1720]: time="2025-05-27T03:23:20.955770801Z" level=info msg="connecting to shim acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845" address="unix:///run/containerd/s/425f68a387c3da407dc43789c234e6e446c9b7f55a0a3cad95b16d21afc98121" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:20.979039 systemd[1]: Started 
cri-containerd-acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845.scope - libcontainer container acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845. May 27 03:23:21.010097 systemd-networkd[1599]: calic10e602f957: Link UP May 27 03:23:21.011450 systemd-networkd[1599]: calic10e602f957: Gained carrier May 27 03:23:21.030713 containerd[1720]: 2025-05-27 03:23:20.794 [INFO][4428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0 calico-apiserver-f58557fbc- calico-apiserver 2ff286f0-6dcf-479f-800b-8f5670d2a81a 810 0 2025-05-27 03:22:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f58557fbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.0.0-a-98ca04e8ee calico-apiserver-f58557fbc-j8plw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic10e602f957 [] [] }} ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-j8plw" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-" May 27 03:23:21.030713 containerd[1720]: 2025-05-27 03:23:20.794 [INFO][4428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-j8plw" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" May 27 03:23:21.030713 containerd[1720]: 2025-05-27 03:23:20.859 [INFO][4478] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" 
HandleID="k8s-pod-network.a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" May 27 03:23:21.031230 containerd[1720]: 2025-05-27 03:23:20.859 [INFO][4478] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" HandleID="k8s-pod-network.a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.0.0-a-98ca04e8ee", "pod":"calico-apiserver-f58557fbc-j8plw", "timestamp":"2025-05-27 03:23:20.859156905 +0000 UTC"}, Hostname:"ci-4344.0.0-a-98ca04e8ee", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:23:21.031230 containerd[1720]: 2025-05-27 03:23:20.859 [INFO][4478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:23:21.031230 containerd[1720]: 2025-05-27 03:23:20.886 [INFO][4478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:23:21.031230 containerd[1720]: 2025-05-27 03:23:20.886 [INFO][4478] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-98ca04e8ee' May 27 03:23:21.031230 containerd[1720]: 2025-05-27 03:23:20.966 [INFO][4478] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.031230 containerd[1720]: 2025-05-27 03:23:20.972 [INFO][4478] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.031230 containerd[1720]: 2025-05-27 03:23:20.978 [INFO][4478] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.031230 containerd[1720]: 2025-05-27 03:23:20.981 [INFO][4478] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.031230 containerd[1720]: 2025-05-27 03:23:20.984 [INFO][4478] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.032107 containerd[1720]: 2025-05-27 03:23:20.984 [INFO][4478] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.032107 containerd[1720]: 2025-05-27 03:23:20.985 [INFO][4478] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de May 27 03:23:21.032107 containerd[1720]: 2025-05-27 03:23:20.990 [INFO][4478] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.032107 containerd[1720]: 2025-05-27 03:23:20.999 [INFO][4478] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.12.131/26] block=192.168.12.128/26 handle="k8s-pod-network.a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.032107 containerd[1720]: 2025-05-27 03:23:20.999 [INFO][4478] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.131/26] handle="k8s-pod-network.a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.032107 containerd[1720]: 2025-05-27 03:23:20.999 [INFO][4478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:23:21.032107 containerd[1720]: 2025-05-27 03:23:20.999 [INFO][4478] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.131/26] IPv6=[] ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" HandleID="k8s-pod-network.a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" May 27 03:23:21.032530 containerd[1720]: 2025-05-27 03:23:21.003 [INFO][4428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-j8plw" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0", GenerateName:"calico-apiserver-f58557fbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ff286f0-6dcf-479f-800b-8f5670d2a81a", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"f58557fbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"", Pod:"calico-apiserver-f58557fbc-j8plw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic10e602f957", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:21.032787 containerd[1720]: 2025-05-27 03:23:21.003 [INFO][4428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.131/32] ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-j8plw" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" May 27 03:23:21.032787 containerd[1720]: 2025-05-27 03:23:21.003 [INFO][4428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic10e602f957 ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-j8plw" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" May 27 03:23:21.032787 containerd[1720]: 2025-05-27 03:23:21.010 [INFO][4428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-j8plw" 
WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" May 27 03:23:21.032846 containerd[1720]: 2025-05-27 03:23:21.011 [INFO][4428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-j8plw" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0", GenerateName:"calico-apiserver-f58557fbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ff286f0-6dcf-479f-800b-8f5670d2a81a", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f58557fbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de", Pod:"calico-apiserver-f58557fbc-j8plw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic10e602f957", MAC:"d6:e7:5c:8e:62:89", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:21.033071 containerd[1720]: 2025-05-27 03:23:21.027 [INFO][4428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-j8plw" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--j8plw-eth0" May 27 03:23:21.044694 containerd[1720]: time="2025-05-27T03:23:21.043857101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-mlshg,Uid:4d81e6cc-fb55-4862-b180-e72b2e08be0e,Namespace:calico-system,Attempt:0,} returns sandbox id \"acca4dc825e4fa1d3bdb34497279ef07b8f714eadd580061838e9230c59ea845\"" May 27 03:23:21.046503 containerd[1720]: time="2025-05-27T03:23:21.046424454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:23:21.085301 containerd[1720]: time="2025-05-27T03:23:21.085274972Z" level=info msg="connecting to shim a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de" address="unix:///run/containerd/s/be4daabae533a6bf4124d26f8609c8a20d5cf24f3d6a180ca78e5a4201ea8eb0" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:21.102039 systemd[1]: Started cri-containerd-a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de.scope - libcontainer container a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de. 
May 27 03:23:21.105424 systemd-networkd[1599]: calif5eb3d1e871: Link UP May 27 03:23:21.105569 systemd-networkd[1599]: calif5eb3d1e871: Gained carrier May 27 03:23:21.124122 containerd[1720]: 2025-05-27 03:23:20.796 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0 coredns-674b8bbfcf- kube-system 473c7f23-fd99-498d-a6c5-6681dfbec009 804 0 2025-05-27 03:22:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.0.0-a-98ca04e8ee coredns-674b8bbfcf-rqp8z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif5eb3d1e871 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqp8z" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-" May 27 03:23:21.124122 containerd[1720]: 2025-05-27 03:23:20.796 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqp8z" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" May 27 03:23:21.124122 containerd[1720]: 2025-05-27 03:23:20.861 [INFO][4480] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" HandleID="k8s-pod-network.00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" May 27 03:23:21.124273 containerd[1720]: 2025-05-27 03:23:20.861 [INFO][4480] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" HandleID="k8s-pod-network.00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.0.0-a-98ca04e8ee", "pod":"coredns-674b8bbfcf-rqp8z", "timestamp":"2025-05-27 03:23:20.86162816 +0000 UTC"}, Hostname:"ci-4344.0.0-a-98ca04e8ee", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:23:21.124273 containerd[1720]: 2025-05-27 03:23:20.861 [INFO][4480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:23:21.124273 containerd[1720]: 2025-05-27 03:23:20.999 [INFO][4480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:23:21.124273 containerd[1720]: 2025-05-27 03:23:20.999 [INFO][4480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-98ca04e8ee' May 27 03:23:21.124273 containerd[1720]: 2025-05-27 03:23:21.070 [INFO][4480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.124273 containerd[1720]: 2025-05-27 03:23:21.074 [INFO][4480] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.124273 containerd[1720]: 2025-05-27 03:23:21.077 [INFO][4480] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.124273 containerd[1720]: 2025-05-27 03:23:21.080 [INFO][4480] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.124273 containerd[1720]: 2025-05-27 03:23:21.082 [INFO][4480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.124464 containerd[1720]: 2025-05-27 03:23:21.082 [INFO][4480] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.124464 containerd[1720]: 2025-05-27 03:23:21.084 [INFO][4480] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0 May 27 03:23:21.124464 containerd[1720]: 2025-05-27 03:23:21.091 [INFO][4480] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.124464 containerd[1720]: 2025-05-27 03:23:21.099 [INFO][4480] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.12.132/26] block=192.168.12.128/26 handle="k8s-pod-network.00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.124464 containerd[1720]: 2025-05-27 03:23:21.099 [INFO][4480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.132/26] handle="k8s-pod-network.00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.124464 containerd[1720]: 2025-05-27 03:23:21.100 [INFO][4480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:23:21.124464 containerd[1720]: 2025-05-27 03:23:21.100 [INFO][4480] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.132/26] IPv6=[] ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" HandleID="k8s-pod-network.00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" May 27 03:23:21.124602 containerd[1720]: 2025-05-27 03:23:21.102 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqp8z" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"473c7f23-fd99-498d-a6c5-6681dfbec009", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"", Pod:"coredns-674b8bbfcf-rqp8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5eb3d1e871", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:21.124602 containerd[1720]: 2025-05-27 03:23:21.102 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.132/32] ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqp8z" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" May 27 03:23:21.124602 containerd[1720]: 2025-05-27 03:23:21.102 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5eb3d1e871 ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqp8z" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" May 27 03:23:21.124602 containerd[1720]: 2025-05-27 03:23:21.105 [INFO][4438] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqp8z" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" May 27 03:23:21.124602 containerd[1720]: 2025-05-27 03:23:21.106 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqp8z" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"473c7f23-fd99-498d-a6c5-6681dfbec009", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0", Pod:"coredns-674b8bbfcf-rqp8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5eb3d1e871", MAC:"f6:54:d8:53:5a:19", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:21.124602 containerd[1720]: 2025-05-27 03:23:21.120 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqp8z" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--rqp8z-eth0" May 27 03:23:21.163740 containerd[1720]: time="2025-05-27T03:23:21.163715843Z" level=info msg="connecting to shim 00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0" address="unix:///run/containerd/s/55e52386a0305dd42b258d3fbe397f291adbd377c90061a4b3c30bcc1839f71d" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:21.172231 containerd[1720]: time="2025-05-27T03:23:21.172202766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f58557fbc-j8plw,Uid:2ff286f0-6dcf-479f-800b-8f5670d2a81a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de\"" May 27 03:23:21.186097 systemd[1]: Started cri-containerd-00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0.scope - libcontainer container 00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0. 
May 27 03:23:21.207520 systemd-networkd[1599]: caliefec6b1b0bb: Link UP May 27 03:23:21.208502 systemd-networkd[1599]: caliefec6b1b0bb: Gained carrier May 27 03:23:21.223258 containerd[1720]: time="2025-05-27T03:23:21.223218506Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:23:21.226086 containerd[1720]: time="2025-05-27T03:23:21.225858627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:23:21.226185 containerd[1720]: time="2025-05-27T03:23:21.225939286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:23:21.226488 kubelet[3072]: E0527 03:23:21.226454 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:23:21.227547 kubelet[3072]: E0527 03:23:21.226863 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:23:21.227547 kubelet[3072]: E0527 03:23:21.227140 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn4c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-mlshg_calico-system(4d81e6cc-fb55-4862-b180-e72b2e08be0e): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:23:21.227724 containerd[1720]: time="2025-05-27T03:23:21.227310311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:23:21.228855 kubelet[3072]: E0527 03:23:21.228509 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:20.828 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0 calico-apiserver-f58557fbc- calico-apiserver dfd983a2-0670-40aa-aa20-ca099e546438 808 0 2025-05-27 03:22:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f58557fbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.0.0-a-98ca04e8ee calico-apiserver-f58557fbc-zdhkv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliefec6b1b0bb [] [] }} ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-zdhkv" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:20.828 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-zdhkv" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:20.869 [INFO][4495] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" HandleID="k8s-pod-network.aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:20.870 [INFO][4495] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" HandleID="k8s-pod-network.aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d98d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.0.0-a-98ca04e8ee", "pod":"calico-apiserver-f58557fbc-zdhkv", "timestamp":"2025-05-27 03:23:20.869987804 +0000 UTC"}, Hostname:"ci-4344.0.0-a-98ca04e8ee", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:20.870 [INFO][4495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.100 [INFO][4495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.100 [INFO][4495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-98ca04e8ee' May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.166 [INFO][4495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.176 [INFO][4495] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.181 [INFO][4495] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.182 [INFO][4495] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.184 [INFO][4495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.184 [INFO][4495] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.185 [INFO][4495] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.191 [INFO][4495] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.200 [INFO][4495] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.12.133/26] block=192.168.12.128/26 handle="k8s-pod-network.aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.200 [INFO][4495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.133/26] handle="k8s-pod-network.aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.200 [INFO][4495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:23:21.232920 containerd[1720]: 2025-05-27 03:23:21.200 [INFO][4495] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.133/26] IPv6=[] ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" HandleID="k8s-pod-network.aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" May 27 03:23:21.233494 containerd[1720]: 2025-05-27 03:23:21.203 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-zdhkv" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0", GenerateName:"calico-apiserver-f58557fbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfd983a2-0670-40aa-aa20-ca099e546438", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"f58557fbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"", Pod:"calico-apiserver-f58557fbc-zdhkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefec6b1b0bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:21.233494 containerd[1720]: 2025-05-27 03:23:21.203 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.133/32] ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-zdhkv" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" May 27 03:23:21.233494 containerd[1720]: 2025-05-27 03:23:21.203 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefec6b1b0bb ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-zdhkv" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" May 27 03:23:21.233494 containerd[1720]: 2025-05-27 03:23:21.207 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-zdhkv" 
WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" May 27 03:23:21.233494 containerd[1720]: 2025-05-27 03:23:21.210 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-zdhkv" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0", GenerateName:"calico-apiserver-f58557fbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfd983a2-0670-40aa-aa20-ca099e546438", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f58557fbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef", Pod:"calico-apiserver-f58557fbc-zdhkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefec6b1b0bb", MAC:"72:f6:89:08:30:7c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:21.233494 containerd[1720]: 2025-05-27 03:23:21.223 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" Namespace="calico-apiserver" Pod="calico-apiserver-f58557fbc-zdhkv" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--apiserver--f58557fbc--zdhkv-eth0" May 27 03:23:21.257186 containerd[1720]: time="2025-05-27T03:23:21.257135816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqp8z,Uid:473c7f23-fd99-498d-a6c5-6681dfbec009,Namespace:kube-system,Attempt:0,} returns sandbox id \"00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0\"" May 27 03:23:21.267760 containerd[1720]: time="2025-05-27T03:23:21.267742396Z" level=info msg="CreateContainer within sandbox \"00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:23:21.288850 containerd[1720]: time="2025-05-27T03:23:21.288807902Z" level=info msg="connecting to shim aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef" address="unix:///run/containerd/s/31366a5cc3b95808b0e7bac5bbabae465959f7ed96aee5d80ce905e621899fda" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:21.290364 containerd[1720]: time="2025-05-27T03:23:21.290296752Z" level=info msg="Container d776a102a4a39b92b65721e164d9c6fd3f2cdab84b133e3c0f2bd182c741c360: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:21.301420 containerd[1720]: time="2025-05-27T03:23:21.301198450Z" level=info msg="CreateContainer within sandbox \"00feebc0208b5f71c4dfae2f3e4fa5a5254d981206d2fa6b715bf24640443dd0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d776a102a4a39b92b65721e164d9c6fd3f2cdab84b133e3c0f2bd182c741c360\"" May 27 03:23:21.301632 containerd[1720]: 
time="2025-05-27T03:23:21.301615483Z" level=info msg="StartContainer for \"d776a102a4a39b92b65721e164d9c6fd3f2cdab84b133e3c0f2bd182c741c360\"" May 27 03:23:21.302337 containerd[1720]: time="2025-05-27T03:23:21.302311005Z" level=info msg="connecting to shim d776a102a4a39b92b65721e164d9c6fd3f2cdab84b133e3c0f2bd182c741c360" address="unix:///run/containerd/s/55e52386a0305dd42b258d3fbe397f291adbd377c90061a4b3c30bcc1839f71d" protocol=ttrpc version=3 May 27 03:23:21.307046 systemd[1]: Started cri-containerd-aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef.scope - libcontainer container aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef. May 27 03:23:21.316912 systemd[1]: Started cri-containerd-d776a102a4a39b92b65721e164d9c6fd3f2cdab84b133e3c0f2bd182c741c360.scope - libcontainer container d776a102a4a39b92b65721e164d9c6fd3f2cdab84b133e3c0f2bd182c741c360. May 27 03:23:21.344107 containerd[1720]: time="2025-05-27T03:23:21.344088484Z" level=info msg="StartContainer for \"d776a102a4a39b92b65721e164d9c6fd3f2cdab84b133e3c0f2bd182c741c360\" returns successfully" May 27 03:23:21.363636 containerd[1720]: time="2025-05-27T03:23:21.363612922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f58557fbc-zdhkv,Uid:dfd983a2-0670-40aa-aa20-ca099e546438,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef\"" May 27 03:23:21.626093 kubelet[3072]: I0527 03:23:21.626049 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:23:21.687154 containerd[1720]: time="2025-05-27T03:23:21.687112381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\" id:\"ce6b2f92c7b935295db22fdc5e58bfd7ac1fcb32ed53ae317d9523e5ea9d4159\" pid:4769 exited_at:{seconds:1748316201 nanos:686669929}" May 27 03:23:21.714891 containerd[1720]: time="2025-05-27T03:23:21.714218130Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pft9h,Uid:aaa94a96-399c-4345-92d5-d811c3bc141a,Namespace:calico-system,Attempt:0,}" May 27 03:23:21.789123 containerd[1720]: time="2025-05-27T03:23:21.789098902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\" id:\"659c19ffabe9bce6154520a317351e2c4876c484002e039f2ea8d8443b73edd0\" pid:4794 exited_at:{seconds:1748316201 nanos:788894406}" May 27 03:23:21.827418 systemd-networkd[1599]: cali8ce5cad8cc4: Link UP May 27 03:23:21.828595 systemd-networkd[1599]: cali8ce5cad8cc4: Gained carrier May 27 03:23:21.843657 kubelet[3072]: E0527 03:23:21.843269 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.769 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0 csi-node-driver- calico-system aaa94a96-399c-4345-92d5-d811c3bc141a 696 0 2025-05-27 03:22:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.0.0-a-98ca04e8ee csi-node-driver-pft9h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8ce5cad8cc4 [] [] }} ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Namespace="calico-system" Pod="csi-node-driver-pft9h" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.769 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Namespace="calico-system" Pod="csi-node-driver-pft9h" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.793 [INFO][4817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" HandleID="k8s-pod-network.f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.793 [INFO][4817] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" HandleID="k8s-pod-network.f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-98ca04e8ee", "pod":"csi-node-driver-pft9h", "timestamp":"2025-05-27 03:23:21.793308623 +0000 UTC"}, Hostname:"ci-4344.0.0-a-98ca04e8ee", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.793 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.793 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.793 [INFO][4817] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-98ca04e8ee' May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.797 [INFO][4817] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.801 [INFO][4817] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.805 [INFO][4817] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.806 [INFO][4817] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.808 [INFO][4817] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.808 [INFO][4817] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.809 [INFO][4817] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f May 27 03:23:21.846495 
containerd[1720]: 2025-05-27 03:23:21.814 [INFO][4817] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.821 [INFO][4817] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.12.134/26] block=192.168.12.128/26 handle="k8s-pod-network.f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.821 [INFO][4817] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.134/26] handle="k8s-pod-network.f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.821 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:23:21.846495 containerd[1720]: 2025-05-27 03:23:21.821 [INFO][4817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.134/26] IPv6=[] ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" HandleID="k8s-pod-network.f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" May 27 03:23:21.848226 containerd[1720]: 2025-05-27 03:23:21.824 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Namespace="calico-system" Pod="csi-node-driver-pft9h" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"aaa94a96-399c-4345-92d5-d811c3bc141a", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"", Pod:"csi-node-driver-pft9h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ce5cad8cc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:21.848226 containerd[1720]: 2025-05-27 03:23:21.824 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.134/32] ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Namespace="calico-system" Pod="csi-node-driver-pft9h" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" May 27 03:23:21.848226 containerd[1720]: 2025-05-27 03:23:21.824 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ce5cad8cc4 ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Namespace="calico-system" Pod="csi-node-driver-pft9h" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" May 27 03:23:21.848226 
containerd[1720]: 2025-05-27 03:23:21.828 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Namespace="calico-system" Pod="csi-node-driver-pft9h" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" May 27 03:23:21.848226 containerd[1720]: 2025-05-27 03:23:21.828 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Namespace="calico-system" Pod="csi-node-driver-pft9h" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aaa94a96-399c-4345-92d5-d811c3bc141a", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f", Pod:"csi-node-driver-pft9h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ce5cad8cc4", MAC:"be:9b:af:07:59:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:21.848226 containerd[1720]: 2025-05-27 03:23:21.842 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" Namespace="calico-system" Pod="csi-node-driver-pft9h" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-csi--node--driver--pft9h-eth0" May 27 03:23:21.883564 kubelet[3072]: I0527 03:23:21.883455 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rqp8z" podStartSLOduration=35.883440101 podStartE2EDuration="35.883440101s" podCreationTimestamp="2025-05-27 03:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:21.858757315 +0000 UTC m=+42.227669068" watchObservedRunningTime="2025-05-27 03:23:21.883440101 +0000 UTC m=+42.252351855" May 27 03:23:21.892256 containerd[1720]: time="2025-05-27T03:23:21.892119666Z" level=info msg="connecting to shim f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f" address="unix:///run/containerd/s/879185071c2f539f741de6f1abea91b1d14c5c885ebfe77dac301ecec18b5d0e" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:21.931015 systemd[1]: Started cri-containerd-f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f.scope - libcontainer container f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f. 
May 27 03:23:21.957254 containerd[1720]: time="2025-05-27T03:23:21.957218320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pft9h,Uid:aaa94a96-399c-4345-92d5-d811c3bc141a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f\"" May 27 03:23:22.175036 systemd-networkd[1599]: cali4f2cc6035f4: Gained IPv6LL May 27 03:23:22.366059 systemd-networkd[1599]: caliefec6b1b0bb: Gained IPv6LL May 27 03:23:22.430083 systemd-networkd[1599]: calic10e602f957: Gained IPv6LL May 27 03:23:22.713163 containerd[1720]: time="2025-05-27T03:23:22.713021249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w998w,Uid:bfdbd044-128d-4e08-85a0-8f80049c1230,Namespace:kube-system,Attempt:0,}" May 27 03:23:22.863389 kubelet[3072]: E0527 03:23:22.862985 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e" May 27 03:23:22.899056 systemd-networkd[1599]: cali40e01ba1931: Link UP May 27 03:23:22.899325 systemd-networkd[1599]: cali40e01ba1931: Gained carrier May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.760 [INFO][4886] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0 coredns-674b8bbfcf- kube-system bfdbd044-128d-4e08-85a0-8f80049c1230 805 
0 2025-05-27 03:22:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.0.0-a-98ca04e8ee coredns-674b8bbfcf-w998w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40e01ba1931 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w998w" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.760 [INFO][4886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w998w" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.801 [INFO][4897] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" HandleID="k8s-pod-network.5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.801 [INFO][4897] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" HandleID="k8s-pod-network.5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9920), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.0.0-a-98ca04e8ee", "pod":"coredns-674b8bbfcf-w998w", "timestamp":"2025-05-27 03:23:22.80158417 +0000 UTC"}, 
Hostname:"ci-4344.0.0-a-98ca04e8ee", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.801 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.801 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.802 [INFO][4897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-98ca04e8ee' May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.810 [INFO][4897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.830 [INFO][4897] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.839 [INFO][4897] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.842 [INFO][4897] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.846 [INFO][4897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.846 [INFO][4897] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.847 
[INFO][4897] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2 May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.866 [INFO][4897] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.889 [INFO][4897] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.12.135/26] block=192.168.12.128/26 handle="k8s-pod-network.5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.890 [INFO][4897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.135/26] handle="k8s-pod-network.5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.890 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:23:22.932508 containerd[1720]: 2025-05-27 03:23:22.890 [INFO][4897] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.135/26] IPv6=[] ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" HandleID="k8s-pod-network.5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" May 27 03:23:22.937214 containerd[1720]: 2025-05-27 03:23:22.893 [INFO][4886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w998w" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bfdbd044-128d-4e08-85a0-8f80049c1230", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"", Pod:"coredns-674b8bbfcf-w998w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali40e01ba1931", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:22.937214 containerd[1720]: 2025-05-27 03:23:22.893 [INFO][4886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.135/32] ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w998w" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" May 27 03:23:22.937214 containerd[1720]: 2025-05-27 03:23:22.893 [INFO][4886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40e01ba1931 ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w998w" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" May 27 03:23:22.937214 containerd[1720]: 2025-05-27 03:23:22.899 [INFO][4886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w998w" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" May 27 03:23:22.937214 containerd[1720]: 2025-05-27 03:23:22.902 [INFO][4886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w998w" 
WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bfdbd044-128d-4e08-85a0-8f80049c1230", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2", Pod:"coredns-674b8bbfcf-w998w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40e01ba1931", MAC:"76:d4:01:e7:cd:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:22.937214 containerd[1720]: 
2025-05-27 03:23:22.925 [INFO][4886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-w998w" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-coredns--674b8bbfcf--w998w-eth0" May 27 03:23:22.943939 systemd-networkd[1599]: calif5eb3d1e871: Gained IPv6LL May 27 03:23:22.991096 containerd[1720]: time="2025-05-27T03:23:22.991005405Z" level=info msg="connecting to shim 5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2" address="unix:///run/containerd/s/7ada22a0a91e16412399ccd80bede4e4f3fada3605dd025c20f3f83a1b95a975" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:23.034206 systemd[1]: Started cri-containerd-5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2.scope - libcontainer container 5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2. May 27 03:23:23.091098 containerd[1720]: time="2025-05-27T03:23:23.091019021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w998w,Uid:bfdbd044-128d-4e08-85a0-8f80049c1230,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2\"" May 27 03:23:23.101018 containerd[1720]: time="2025-05-27T03:23:23.100697554Z" level=info msg="CreateContainer within sandbox \"5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:23:23.120290 containerd[1720]: time="2025-05-27T03:23:23.120268496Z" level=info msg="Container 1c8f6e1487983270fd9b35ad70c22c220b05b14ab6547aafa215be8cd53e3b7f: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:23.122250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1936962994.mount: Deactivated successfully. 
May 27 03:23:23.133603 containerd[1720]: time="2025-05-27T03:23:23.133535229Z" level=info msg="CreateContainer within sandbox \"5d82f4b9b15a08d693482d54f19b9d13c6d801510d3130f0d45fee34a37f90a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c8f6e1487983270fd9b35ad70c22c220b05b14ab6547aafa215be8cd53e3b7f\"" May 27 03:23:23.134510 containerd[1720]: time="2025-05-27T03:23:23.134460980Z" level=info msg="StartContainer for \"1c8f6e1487983270fd9b35ad70c22c220b05b14ab6547aafa215be8cd53e3b7f\"" May 27 03:23:23.135858 containerd[1720]: time="2025-05-27T03:23:23.135806204Z" level=info msg="connecting to shim 1c8f6e1487983270fd9b35ad70c22c220b05b14ab6547aafa215be8cd53e3b7f" address="unix:///run/containerd/s/7ada22a0a91e16412399ccd80bede4e4f3fada3605dd025c20f3f83a1b95a975" protocol=ttrpc version=3 May 27 03:23:23.157183 systemd[1]: Started cri-containerd-1c8f6e1487983270fd9b35ad70c22c220b05b14ab6547aafa215be8cd53e3b7f.scope - libcontainer container 1c8f6e1487983270fd9b35ad70c22c220b05b14ab6547aafa215be8cd53e3b7f. 
May 27 03:23:23.198096 containerd[1720]: time="2025-05-27T03:23:23.198074910Z" level=info msg="StartContainer for \"1c8f6e1487983270fd9b35ad70c22c220b05b14ab6547aafa215be8cd53e3b7f\" returns successfully" May 27 03:23:23.582118 systemd-networkd[1599]: cali8ce5cad8cc4: Gained IPv6LL May 27 03:23:23.717652 containerd[1720]: time="2025-05-27T03:23:23.717384564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8994977dd-fcvt7,Uid:e3f74c0c-4c3c-46fc-b841-26a126161c3b,Namespace:calico-system,Attempt:0,}" May 27 03:23:23.910032 kubelet[3072]: I0527 03:23:23.909828 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w998w" podStartSLOduration=37.909808631 podStartE2EDuration="37.909808631s" podCreationTimestamp="2025-05-27 03:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:23.891644622 +0000 UTC m=+44.260556379" watchObservedRunningTime="2025-05-27 03:23:23.909808631 +0000 UTC m=+44.278720389" May 27 03:23:23.964407 systemd-networkd[1599]: calia860cb15814: Link UP May 27 03:23:23.967243 systemd-networkd[1599]: calia860cb15814: Gained carrier May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.795 [INFO][4998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0 calico-kube-controllers-8994977dd- calico-system e3f74c0c-4c3c-46fc-b841-26a126161c3b 807 0 2025-05-27 03:22:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8994977dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.0.0-a-98ca04e8ee calico-kube-controllers-8994977dd-fcvt7 eth0 calico-kube-controllers 
[] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia860cb15814 [] [] }} ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Namespace="calico-system" Pod="calico-kube-controllers-8994977dd-fcvt7" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.795 [INFO][4998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Namespace="calico-system" Pod="calico-kube-controllers-8994977dd-fcvt7" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.868 [INFO][5010] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" HandleID="k8s-pod-network.ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.871 [INFO][5010] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" HandleID="k8s-pod-network.ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9b80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-98ca04e8ee", "pod":"calico-kube-controllers-8994977dd-fcvt7", "timestamp":"2025-05-27 03:23:23.866861324 +0000 UTC"}, Hostname:"ci-4344.0.0-a-98ca04e8ee", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.871 [INFO][5010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.871 [INFO][5010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.871 [INFO][5010] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-98ca04e8ee' May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.889 [INFO][5010] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.910 [INFO][5010] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.930 [INFO][5010] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.932 [INFO][5010] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.939 [INFO][5010] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.939 [INFO][5010] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.942 [INFO][5010] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5 May 27 03:23:23.999164 
containerd[1720]: 2025-05-27 03:23:23.947 [INFO][5010] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.957 [INFO][5010] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.12.136/26] block=192.168.12.128/26 handle="k8s-pod-network.ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.957 [INFO][5010] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.136/26] handle="k8s-pod-network.ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" host="ci-4344.0.0-a-98ca04e8ee" May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.957 [INFO][5010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:23:23.999164 containerd[1720]: 2025-05-27 03:23:23.957 [INFO][5010] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.136/26] IPv6=[] ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" HandleID="k8s-pod-network.ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Workload="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" May 27 03:23:24.003426 containerd[1720]: 2025-05-27 03:23:23.960 [INFO][4998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Namespace="calico-system" Pod="calico-kube-controllers-8994977dd-fcvt7" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0", 
GenerateName:"calico-kube-controllers-8994977dd-", Namespace:"calico-system", SelfLink:"", UID:"e3f74c0c-4c3c-46fc-b841-26a126161c3b", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8994977dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"", Pod:"calico-kube-controllers-8994977dd-fcvt7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia860cb15814", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:24.003426 containerd[1720]: 2025-05-27 03:23:23.961 [INFO][4998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.136/32] ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Namespace="calico-system" Pod="calico-kube-controllers-8994977dd-fcvt7" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" May 27 03:23:24.003426 containerd[1720]: 2025-05-27 03:23:23.961 [INFO][4998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia860cb15814 ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Namespace="calico-system" 
Pod="calico-kube-controllers-8994977dd-fcvt7" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" May 27 03:23:24.003426 containerd[1720]: 2025-05-27 03:23:23.971 [INFO][4998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Namespace="calico-system" Pod="calico-kube-controllers-8994977dd-fcvt7" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" May 27 03:23:24.003426 containerd[1720]: 2025-05-27 03:23:23.972 [INFO][4998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Namespace="calico-system" Pod="calico-kube-controllers-8994977dd-fcvt7" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0", GenerateName:"calico-kube-controllers-8994977dd-", Namespace:"calico-system", SelfLink:"", UID:"e3f74c0c-4c3c-46fc-b841-26a126161c3b", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8994977dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4344.0.0-a-98ca04e8ee", ContainerID:"ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5", Pod:"calico-kube-controllers-8994977dd-fcvt7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia860cb15814", MAC:"ce:0d:e7:a0:24:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:23:24.003426 containerd[1720]: 2025-05-27 03:23:23.996 [INFO][4998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" Namespace="calico-system" Pod="calico-kube-controllers-8994977dd-fcvt7" WorkloadEndpoint="ci--4344.0.0--a--98ca04e8ee-k8s-calico--kube--controllers--8994977dd--fcvt7-eth0" May 27 03:23:24.064626 containerd[1720]: time="2025-05-27T03:23:24.064563545Z" level=info msg="connecting to shim ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5" address="unix:///run/containerd/s/18ece9869af1a9eb12ca1c61c678cb36e3c146ce68da0b0d2622aaf549210ff2" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:24.112115 systemd[1]: Started cri-containerd-ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5.scope - libcontainer container ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5. 
May 27 03:23:24.399609 containerd[1720]: time="2025-05-27T03:23:24.399569976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8994977dd-fcvt7,Uid:e3f74c0c-4c3c-46fc-b841-26a126161c3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5\"" May 27 03:23:24.560830 containerd[1720]: time="2025-05-27T03:23:24.560164482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:24.562364 containerd[1720]: time="2025-05-27T03:23:24.562339286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:23:24.564507 containerd[1720]: time="2025-05-27T03:23:24.564480617Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:24.568583 containerd[1720]: time="2025-05-27T03:23:24.568554558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:24.568939 containerd[1720]: time="2025-05-27T03:23:24.568920074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.341584903s" May 27 03:23:24.568982 containerd[1720]: time="2025-05-27T03:23:24.568950746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference 
\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:23:24.569928 containerd[1720]: time="2025-05-27T03:23:24.569911570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:23:24.576480 containerd[1720]: time="2025-05-27T03:23:24.576458726Z" level=info msg="CreateContainer within sandbox \"a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:23:24.595824 containerd[1720]: time="2025-05-27T03:23:24.595777580Z" level=info msg="Container 8ebe354a137311359eacb9cbd3ab3f8ee2084670ee6f2c6549b1e9378c20f356: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:24.604826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321817635.mount: Deactivated successfully. May 27 03:23:24.606125 systemd-networkd[1599]: cali40e01ba1931: Gained IPv6LL May 27 03:23:24.614166 containerd[1720]: time="2025-05-27T03:23:24.614100865Z" level=info msg="CreateContainer within sandbox \"a3c383b4827274fe12ae968d662c21ef665610895f4161c9fb93dbcd33cbe5de\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8ebe354a137311359eacb9cbd3ab3f8ee2084670ee6f2c6549b1e9378c20f356\"" May 27 03:23:24.615818 containerd[1720]: time="2025-05-27T03:23:24.614759370Z" level=info msg="StartContainer for \"8ebe354a137311359eacb9cbd3ab3f8ee2084670ee6f2c6549b1e9378c20f356\"" May 27 03:23:24.616973 containerd[1720]: time="2025-05-27T03:23:24.616951179Z" level=info msg="connecting to shim 8ebe354a137311359eacb9cbd3ab3f8ee2084670ee6f2c6549b1e9378c20f356" address="unix:///run/containerd/s/be4daabae533a6bf4124d26f8609c8a20d5cf24f3d6a180ca78e5a4201ea8eb0" protocol=ttrpc version=3 May 27 03:23:24.638023 systemd[1]: Started cri-containerd-8ebe354a137311359eacb9cbd3ab3f8ee2084670ee6f2c6549b1e9378c20f356.scope - libcontainer container 8ebe354a137311359eacb9cbd3ab3f8ee2084670ee6f2c6549b1e9378c20f356. 
May 27 03:23:24.797326 containerd[1720]: time="2025-05-27T03:23:24.797236468Z" level=info msg="StartContainer for \"8ebe354a137311359eacb9cbd3ab3f8ee2084670ee6f2c6549b1e9378c20f356\" returns successfully" May 27 03:23:24.886665 containerd[1720]: time="2025-05-27T03:23:24.886073130Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:24.888206 containerd[1720]: time="2025-05-27T03:23:24.888182727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 03:23:24.889609 containerd[1720]: time="2025-05-27T03:23:24.889589542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 319.586478ms" May 27 03:23:24.889706 containerd[1720]: time="2025-05-27T03:23:24.889693366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:23:24.891955 containerd[1720]: time="2025-05-27T03:23:24.891854423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:23:24.897590 containerd[1720]: time="2025-05-27T03:23:24.897569512Z" level=info msg="CreateContainer within sandbox \"aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:23:24.910647 containerd[1720]: time="2025-05-27T03:23:24.910625244Z" level=info msg="Container f2f540bb163fc5d27aa120fb8ccc4b9e260f298c6b4c21ae003370ef89958a7c: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:24.937961 containerd[1720]: 
time="2025-05-27T03:23:24.937843892Z" level=info msg="CreateContainer within sandbox \"aaf0de016696551bd55edd86f8e494c65c8baedda351ee02ccf3cf8f8de24fef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2f540bb163fc5d27aa120fb8ccc4b9e260f298c6b4c21ae003370ef89958a7c\"" May 27 03:23:24.943645 containerd[1720]: time="2025-05-27T03:23:24.943110932Z" level=info msg="StartContainer for \"f2f540bb163fc5d27aa120fb8ccc4b9e260f298c6b4c21ae003370ef89958a7c\"" May 27 03:23:24.944693 containerd[1720]: time="2025-05-27T03:23:24.944668715Z" level=info msg="connecting to shim f2f540bb163fc5d27aa120fb8ccc4b9e260f298c6b4c21ae003370ef89958a7c" address="unix:///run/containerd/s/31366a5cc3b95808b0e7bac5bbabae465959f7ed96aee5d80ce905e621899fda" protocol=ttrpc version=3 May 27 03:23:24.974403 systemd[1]: Started cri-containerd-f2f540bb163fc5d27aa120fb8ccc4b9e260f298c6b4c21ae003370ef89958a7c.scope - libcontainer container f2f540bb163fc5d27aa120fb8ccc4b9e260f298c6b4c21ae003370ef89958a7c. 
May 27 03:23:25.239088 containerd[1720]: time="2025-05-27T03:23:25.239046066Z" level=info msg="StartContainer for \"f2f540bb163fc5d27aa120fb8ccc4b9e260f298c6b4c21ae003370ef89958a7c\" returns successfully" May 27 03:23:25.879979 kubelet[3072]: I0527 03:23:25.879804 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:23:25.886579 systemd-networkd[1599]: calia860cb15814: Gained IPv6LL May 27 03:23:25.898000 kubelet[3072]: I0527 03:23:25.897746 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f58557fbc-zdhkv" podStartSLOduration=28.371115981 podStartE2EDuration="31.897725567s" podCreationTimestamp="2025-05-27 03:22:54 +0000 UTC" firstStartedPulling="2025-05-27 03:23:21.364672992 +0000 UTC m=+41.733584747" lastFinishedPulling="2025-05-27 03:23:24.891282575 +0000 UTC m=+45.260194333" observedRunningTime="2025-05-27 03:23:25.8976774 +0000 UTC m=+46.266589155" watchObservedRunningTime="2025-05-27 03:23:25.897725567 +0000 UTC m=+46.266637426" May 27 03:23:25.898000 kubelet[3072]: I0527 03:23:25.897866 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f58557fbc-j8plw" podStartSLOduration=28.501683896 podStartE2EDuration="31.897860687s" podCreationTimestamp="2025-05-27 03:22:54 +0000 UTC" firstStartedPulling="2025-05-27 03:23:21.17355042 +0000 UTC m=+41.542462164" lastFinishedPulling="2025-05-27 03:23:24.569727207 +0000 UTC m=+44.938638955" observedRunningTime="2025-05-27 03:23:24.894243126 +0000 UTC m=+45.263154886" watchObservedRunningTime="2025-05-27 03:23:25.897860687 +0000 UTC m=+46.266772446" May 27 03:23:26.394258 containerd[1720]: time="2025-05-27T03:23:26.394214554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:26.398317 containerd[1720]: time="2025-05-27T03:23:26.398285958Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:23:26.400501 containerd[1720]: time="2025-05-27T03:23:26.400472445Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:26.404900 containerd[1720]: time="2025-05-27T03:23:26.404326786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:26.405023 containerd[1720]: time="2025-05-27T03:23:26.405002434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.511960543s" May 27 03:23:26.405079 containerd[1720]: time="2025-05-27T03:23:26.405069100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:23:26.407376 containerd[1720]: time="2025-05-27T03:23:26.407079264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:23:26.413969 containerd[1720]: time="2025-05-27T03:23:26.413943318Z" level=info msg="CreateContainer within sandbox \"f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:23:26.436006 containerd[1720]: time="2025-05-27T03:23:26.435983892Z" level=info msg="Container b15bde9a4780e1f64eeaa4dbad1650f8a3b86f2ccc166377598075174e850ae9: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:26.455287 containerd[1720]: 
time="2025-05-27T03:23:26.455259549Z" level=info msg="CreateContainer within sandbox \"f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b15bde9a4780e1f64eeaa4dbad1650f8a3b86f2ccc166377598075174e850ae9\"" May 27 03:23:26.456068 containerd[1720]: time="2025-05-27T03:23:26.456005626Z" level=info msg="StartContainer for \"b15bde9a4780e1f64eeaa4dbad1650f8a3b86f2ccc166377598075174e850ae9\"" May 27 03:23:26.458402 containerd[1720]: time="2025-05-27T03:23:26.458335639Z" level=info msg="connecting to shim b15bde9a4780e1f64eeaa4dbad1650f8a3b86f2ccc166377598075174e850ae9" address="unix:///run/containerd/s/879185071c2f539f741de6f1abea91b1d14c5c885ebfe77dac301ecec18b5d0e" protocol=ttrpc version=3 May 27 03:23:26.489037 systemd[1]: Started cri-containerd-b15bde9a4780e1f64eeaa4dbad1650f8a3b86f2ccc166377598075174e850ae9.scope - libcontainer container b15bde9a4780e1f64eeaa4dbad1650f8a3b86f2ccc166377598075174e850ae9. 
May 27 03:23:26.554252 containerd[1720]: time="2025-05-27T03:23:26.554232013Z" level=info msg="StartContainer for \"b15bde9a4780e1f64eeaa4dbad1650f8a3b86f2ccc166377598075174e850ae9\" returns successfully" May 27 03:23:26.886170 kubelet[3072]: I0527 03:23:26.885978 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:23:28.903123 containerd[1720]: time="2025-05-27T03:23:28.903062803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:28.906213 containerd[1720]: time="2025-05-27T03:23:28.905932202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:23:28.908234 containerd[1720]: time="2025-05-27T03:23:28.908196956Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:28.913118 containerd[1720]: time="2025-05-27T03:23:28.913085907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:28.914671 containerd[1720]: time="2025-05-27T03:23:28.914640962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.507533356s" May 27 03:23:28.914754 containerd[1720]: time="2025-05-27T03:23:28.914678009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference 
\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:23:28.916026 containerd[1720]: time="2025-05-27T03:23:28.916002572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 03:23:28.933733 containerd[1720]: time="2025-05-27T03:23:28.933667201Z" level=info msg="CreateContainer within sandbox \"ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 03:23:28.953029 containerd[1720]: time="2025-05-27T03:23:28.953001129Z" level=info msg="Container 05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:29.087458 containerd[1720]: time="2025-05-27T03:23:29.087412750Z" level=info msg="CreateContainer within sandbox \"ceb1ce75705a68dd0d7aa6f18e2bf4520ca07618e3ef19f0cb8c77cfe86ceaa5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\"" May 27 03:23:29.088524 containerd[1720]: time="2025-05-27T03:23:29.088382351Z" level=info msg="StartContainer for \"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\"" May 27 03:23:29.091120 containerd[1720]: time="2025-05-27T03:23:29.091041570Z" level=info msg="connecting to shim 05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5" address="unix:///run/containerd/s/18ece9869af1a9eb12ca1c61c678cb36e3c146ce68da0b0d2622aaf549210ff2" protocol=ttrpc version=3 May 27 03:23:29.119944 systemd[1]: Started cri-containerd-05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5.scope - libcontainer container 05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5. 
May 27 03:23:29.282931 containerd[1720]: time="2025-05-27T03:23:29.282387540Z" level=info msg="StartContainer for \"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" returns successfully" May 27 03:23:29.973951 containerd[1720]: time="2025-05-27T03:23:29.973770925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" id:\"f86f3b92d8468ce1a93580557e81028e6304c4d9801c076b331218d89d411033\" pid:5257 exited_at:{seconds:1748316209 nanos:973031840}" May 27 03:23:29.991854 kubelet[3072]: I0527 03:23:29.991798 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8994977dd-fcvt7" podStartSLOduration=28.477414599 podStartE2EDuration="32.991778326s" podCreationTimestamp="2025-05-27 03:22:57 +0000 UTC" firstStartedPulling="2025-05-27 03:23:24.401231987 +0000 UTC m=+44.770143735" lastFinishedPulling="2025-05-27 03:23:28.91559571 +0000 UTC m=+49.284507462" observedRunningTime="2025-05-27 03:23:29.922157546 +0000 UTC m=+50.291069302" watchObservedRunningTime="2025-05-27 03:23:29.991778326 +0000 UTC m=+50.360690081" May 27 03:23:30.284208 containerd[1720]: time="2025-05-27T03:23:30.284102488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:30.286166 containerd[1720]: time="2025-05-27T03:23:30.286129057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 03:23:30.288410 containerd[1720]: time="2025-05-27T03:23:30.288369452Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:30.291227 containerd[1720]: time="2025-05-27T03:23:30.291188946Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:30.291576 containerd[1720]: time="2025-05-27T03:23:30.291470880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.375436294s" May 27 03:23:30.291576 containerd[1720]: time="2025-05-27T03:23:30.291499752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 03:23:30.296763 containerd[1720]: time="2025-05-27T03:23:30.296738265Z" level=info msg="CreateContainer within sandbox \"f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 03:23:30.313126 containerd[1720]: time="2025-05-27T03:23:30.312039565Z" level=info msg="Container 9f5267d9f4ad28b446123f17dcab09b4ddcda7f7e1c35938570879217d591902: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:30.325725 containerd[1720]: time="2025-05-27T03:23:30.325701595Z" level=info msg="CreateContainer within sandbox \"f21c54feed689793ae0cd2c0b61c5ca975cd0e60a9e9e6d19326efa33727247f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9f5267d9f4ad28b446123f17dcab09b4ddcda7f7e1c35938570879217d591902\"" May 27 03:23:30.326694 containerd[1720]: time="2025-05-27T03:23:30.326672890Z" level=info msg="StartContainer for \"9f5267d9f4ad28b446123f17dcab09b4ddcda7f7e1c35938570879217d591902\"" May 27 03:23:30.328772 
containerd[1720]: time="2025-05-27T03:23:30.328748451Z" level=info msg="connecting to shim 9f5267d9f4ad28b446123f17dcab09b4ddcda7f7e1c35938570879217d591902" address="unix:///run/containerd/s/879185071c2f539f741de6f1abea91b1d14c5c885ebfe77dac301ecec18b5d0e" protocol=ttrpc version=3 May 27 03:23:30.352013 systemd[1]: Started cri-containerd-9f5267d9f4ad28b446123f17dcab09b4ddcda7f7e1c35938570879217d591902.scope - libcontainer container 9f5267d9f4ad28b446123f17dcab09b4ddcda7f7e1c35938570879217d591902. May 27 03:23:30.381186 containerd[1720]: time="2025-05-27T03:23:30.381167922Z" level=info msg="StartContainer for \"9f5267d9f4ad28b446123f17dcab09b4ddcda7f7e1c35938570879217d591902\" returns successfully" May 27 03:23:30.798286 kubelet[3072]: I0527 03:23:30.798261 3072 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 03:23:30.798390 kubelet[3072]: I0527 03:23:30.798301 3072 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 03:23:30.924631 kubelet[3072]: I0527 03:23:30.924568 3072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pft9h" podStartSLOduration=25.59064707 podStartE2EDuration="33.924548786s" podCreationTimestamp="2025-05-27 03:22:57 +0000 UTC" firstStartedPulling="2025-05-27 03:23:21.958170619 +0000 UTC m=+42.327082371" lastFinishedPulling="2025-05-27 03:23:30.292072334 +0000 UTC m=+50.660984087" observedRunningTime="2025-05-27 03:23:30.92442465 +0000 UTC m=+51.293336404" watchObservedRunningTime="2025-05-27 03:23:30.924548786 +0000 UTC m=+51.293460551" May 27 03:23:31.714780 containerd[1720]: time="2025-05-27T03:23:31.714729140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:23:31.882737 containerd[1720]: time="2025-05-27T03:23:31.882686443Z" 
level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:23:31.885362 containerd[1720]: time="2025-05-27T03:23:31.885315879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:23:31.885466 containerd[1720]: time="2025-05-27T03:23:31.885341199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:23:31.885569 kubelet[3072]: E0527 03:23:31.885523 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:23:31.885954 kubelet[3072]: E0527 03:23:31.885585 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:23:31.885954 kubelet[3072]: E0527 03:23:31.885722 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:db16f07b00b84e98ae55f4b0d0db1dcc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:23:31.887954 containerd[1720]: time="2025-05-27T03:23:31.887923041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:23:32.054712 containerd[1720]: time="2025-05-27T03:23:32.054573635Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:23:32.057124 containerd[1720]: time="2025-05-27T03:23:32.057095794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:23:32.057225 containerd[1720]: time="2025-05-27T03:23:32.057106683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:23:32.057336 kubelet[3072]: E0527 03:23:32.057304 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:23:32.057401 
kubelet[3072]: E0527 03:23:32.057348 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:23:32.057557 kubelet[3072]: E0527 03:23:32.057498 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptio
ns:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:23:32.058722 kubelet[3072]: E0527 03:23:32.058677 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" 
pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:23:34.715608 containerd[1720]: time="2025-05-27T03:23:34.714758073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:23:34.920524 containerd[1720]: time="2025-05-27T03:23:34.920471098Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:23:35.131064 containerd[1720]: time="2025-05-27T03:23:35.130980042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:23:35.131362 containerd[1720]: time="2025-05-27T03:23:35.131116990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:23:35.133599 kubelet[3072]: E0527 03:23:35.132908 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:23:35.133599 kubelet[3072]: E0527 03:23:35.132983 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:23:35.133599 kubelet[3072]: E0527 03:23:35.133173 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn4c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&Exe
cAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-mlshg_calico-system(4d81e6cc-fb55-4862-b180-e72b2e08be0e): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:23:35.136512 kubelet[3072]: E0527 03:23:35.135095 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e" May 27 03:23:41.780899 containerd[1720]: time="2025-05-27T03:23:41.780836863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" id:\"fa230f300d293c5c089857d4629c03f98a3376ecd0c57c137f79c6702ad9842a\" pid:5327 exited_at:{seconds:1748316221 nanos:780386589}" May 27 03:23:43.718076 kubelet[3072]: E0527 03:23:43.718019 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:23:47.713298 kubelet[3072]: E0527 03:23:47.712916 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e" May 27 03:23:51.817126 containerd[1720]: time="2025-05-27T03:23:51.817069487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\" id:\"acc4913d1d93cd2012cc47d10a0a813eb3441b3c426fdd91c3ec341fb73c8de6\" pid:5354 exited_at:{seconds:1748316231 nanos:816037987}" May 27 03:23:53.277383 kubelet[3072]: I0527 03:23:53.277337 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:23:56.715658 containerd[1720]: time="2025-05-27T03:23:56.715599691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:23:56.902897 containerd[1720]: time="2025-05-27T03:23:56.902690789Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:23:56.906358 containerd[1720]: time="2025-05-27T03:23:56.906219930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:23:56.906358 containerd[1720]: time="2025-05-27T03:23:56.906328953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:23:56.906717 kubelet[3072]: E0527 03:23:56.906661 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:23:56.907396 kubelet[3072]: E0527 03:23:56.906945 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:23:56.907396 kubelet[3072]: E0527 03:23:56.907101 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:db16f07b00b84e98ae55f4b0d0db1dcc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:23:56.909148 containerd[1720]: 
time="2025-05-27T03:23:56.909100686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:23:57.105047 containerd[1720]: time="2025-05-27T03:23:57.104932920Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:23:57.107368 containerd[1720]: time="2025-05-27T03:23:57.107256409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:23:57.107368 containerd[1720]: time="2025-05-27T03:23:57.107347405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:23:57.107772 kubelet[3072]: E0527 03:23:57.107660 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:23:57.107772 kubelet[3072]: E0527 03:23:57.107706 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:23:57.108172 kubelet[3072]: E0527 03:23:57.108128 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:23:57.109716 kubelet[3072]: E0527 03:23:57.109648 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:23:59.945786 containerd[1720]: 
time="2025-05-27T03:23:59.945732083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" id:\"8b38ab261680eb029b0d1e90d7341df180a62c494447a2fbca48318064fcacf0\" pid:5388 exited_at:{seconds:1748316239 nanos:945516839}" May 27 03:24:01.719903 containerd[1720]: time="2025-05-27T03:24:01.719600337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:24:01.881383 containerd[1720]: time="2025-05-27T03:24:01.881326080Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:01.883685 containerd[1720]: time="2025-05-27T03:24:01.883635311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:01.883929 containerd[1720]: time="2025-05-27T03:24:01.883667266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:24:01.883966 kubelet[3072]: E0527 03:24:01.883897 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:01.883966 kubelet[3072]: E0527 03:24:01.883947 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:01.884345 kubelet[3072]: E0527 03:24:01.884116 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn4c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-mlshg_calico-system(4d81e6cc-fb55-4862-b180-e72b2e08be0e): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:01.885704 kubelet[3072]: E0527 03:24:01.885657 3072 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e" May 27 03:24:05.465395 kubelet[3072]: I0527 03:24:05.464741 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:08.713905 kubelet[3072]: E0527 03:24:08.713827 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:24:16.712825 kubelet[3072]: E0527 
03:24:16.712741 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e" May 27 03:24:19.714757 kubelet[3072]: E0527 03:24:19.714296 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:24:21.757760 
containerd[1720]: time="2025-05-27T03:24:21.757694009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\" id:\"959154af69fa7d32b22b678f3c1b084c7e3374a0b2310e9cea3fdc70af43510e\" pid:5417 exited_at:{seconds:1748316261 nanos:757260595}" May 27 03:24:29.942982 containerd[1720]: time="2025-05-27T03:24:29.942923286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" id:\"5095b8c003affbd7941c56eb25c0c264b649b2b36fc6c48e79f61cfa71cae303\" pid:5440 exited_at:{seconds:1748316269 nanos:942665831}" May 27 03:24:31.713225 kubelet[3072]: E0527 03:24:31.713111 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e" May 27 03:24:31.716478 kubelet[3072]: E0527 03:24:31.716378 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:24:41.781141 containerd[1720]: time="2025-05-27T03:24:41.781076996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" id:\"830278de63cc15c47f23985d98dc4f2de94f955d31807b5cecae22d368e57845\" pid:5469 exited_at:{seconds:1748316281 nanos:780698165}" May 27 03:24:45.714479 containerd[1720]: time="2025-05-27T03:24:45.713769731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:45.881966 containerd[1720]: time="2025-05-27T03:24:45.881924060Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:45.884488 containerd[1720]: time="2025-05-27T03:24:45.884344353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:45.884488 containerd[1720]: time="2025-05-27T03:24:45.884461092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:45.884804 kubelet[3072]: E0527 03:24:45.884752 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:45.885805 kubelet[3072]: E0527 03:24:45.884810 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:45.885805 kubelet[3072]: E0527 03:24:45.885347 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:db16f07b00b84e98ae55f4b0d0db1dcc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:45.888364 containerd[1720]: 
time="2025-05-27T03:24:45.888334708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:24:46.046468 containerd[1720]: time="2025-05-27T03:24:46.046361642Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:46.048899 containerd[1720]: time="2025-05-27T03:24:46.048822004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:46.048899 containerd[1720]: time="2025-05-27T03:24:46.048868045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:24:46.049079 kubelet[3072]: E0527 03:24:46.049014 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:46.049079 kubelet[3072]: E0527 03:24:46.049055 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:46.049233 kubelet[3072]: E0527 03:24:46.049183 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:46.050471 kubelet[3072]: E0527 03:24:46.050429 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879" May 27 03:24:46.712924 containerd[1720]: 
time="2025-05-27T03:24:46.712745123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:24:46.874591 containerd[1720]: time="2025-05-27T03:24:46.874539169Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:24:46.877056 containerd[1720]: time="2025-05-27T03:24:46.877020207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:24:46.877190 containerd[1720]: time="2025-05-27T03:24:46.877044715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:24:46.877409 kubelet[3072]: E0527 03:24:46.877279 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:24:46.878398 kubelet[3072]: E0527 03:24:46.877345 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:24:46.878525 kubelet[3072]: E0527 03:24:46.878308 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn4c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-mlshg_calico-system(4d81e6cc-fb55-4862-b180-e72b2e08be0e): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:24:46.888271 kubelet[3072]: E0527 03:24:46.888232 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e"
May 27 03:24:51.749774 containerd[1720]: time="2025-05-27T03:24:51.749602698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\" id:\"792fdc8512862eca7e3c8656c0e23791ccd6f4924e5221104b1045fc16aa6ce6\" pid:5502 exited_at:{seconds:1748316291 nanos:749244927}"
May 27 03:24:57.713703 kubelet[3072]: E0527 03:24:57.713642 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e"
May 27 03:24:57.717582 kubelet[3072]: E0527 03:24:57.717109 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879"
May 27 03:24:59.934355 containerd[1720]: time="2025-05-27T03:24:59.934300625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" id:\"736d0668ee3a5ce7bbf2fb9b1ff21aed00f6aa97ce6c56b5e8d345186ae4d4b9\" pid:5541 exited_at:{seconds:1748316299 nanos:934085382}"
May 27 03:25:02.548142 systemd[1]: Started sshd@7-10.200.8.16:22-10.200.16.10:40102.service - OpenSSH per-connection server daemon (10.200.16.10:40102).
May 27 03:25:03.196986 sshd[5554]: Accepted publickey for core from 10.200.16.10 port 40102 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:03.198951 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:03.205461 systemd-logind[1703]: New session 10 of user core.
May 27 03:25:03.213263 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 03:25:03.704407 sshd[5556]: Connection closed by 10.200.16.10 port 40102
May 27 03:25:03.705016 sshd-session[5554]: pam_unix(sshd:session): session closed for user core
May 27 03:25:03.708055 systemd[1]: sshd@7-10.200.8.16:22-10.200.16.10:40102.service: Deactivated successfully.
May 27 03:25:03.710446 systemd[1]: session-10.scope: Deactivated successfully.
May 27 03:25:03.715056 systemd-logind[1703]: Session 10 logged out. Waiting for processes to exit.
May 27 03:25:03.718316 systemd-logind[1703]: Removed session 10.
May 27 03:25:08.819918 systemd[1]: Started sshd@8-10.200.8.16:22-10.200.16.10:33854.service - OpenSSH per-connection server daemon (10.200.16.10:33854).
May 27 03:25:09.459561 sshd[5570]: Accepted publickey for core from 10.200.16.10 port 33854 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:09.460839 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:09.465428 systemd-logind[1703]: New session 11 of user core.
May 27 03:25:09.474025 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 03:25:09.956277 sshd[5572]: Connection closed by 10.200.16.10 port 33854
May 27 03:25:09.956815 sshd-session[5570]: pam_unix(sshd:session): session closed for user core
May 27 03:25:09.960344 systemd[1]: sshd@8-10.200.8.16:22-10.200.16.10:33854.service: Deactivated successfully.
May 27 03:25:09.962318 systemd[1]: session-11.scope: Deactivated successfully.
May 27 03:25:09.963118 systemd-logind[1703]: Session 11 logged out. Waiting for processes to exit.
May 27 03:25:09.964369 systemd-logind[1703]: Removed session 11.
May 27 03:25:10.714122 kubelet[3072]: E0527 03:25:10.713441 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e"
May 27 03:25:12.713725 kubelet[3072]: E0527 03:25:12.713645 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879"
May 27 03:25:15.077787 systemd[1]: Started sshd@9-10.200.8.16:22-10.200.16.10:33866.service - OpenSSH per-connection server daemon (10.200.16.10:33866).
May 27 03:25:15.712133 sshd[5585]: Accepted publickey for core from 10.200.16.10 port 33866 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:15.714420 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:15.719104 systemd-logind[1703]: New session 12 of user core.
May 27 03:25:15.724052 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 03:25:16.210858 sshd[5587]: Connection closed by 10.200.16.10 port 33866
May 27 03:25:16.211473 sshd-session[5585]: pam_unix(sshd:session): session closed for user core
May 27 03:25:16.215150 systemd[1]: sshd@9-10.200.8.16:22-10.200.16.10:33866.service: Deactivated successfully.
May 27 03:25:16.217249 systemd[1]: session-12.scope: Deactivated successfully.
May 27 03:25:16.218029 systemd-logind[1703]: Session 12 logged out. Waiting for processes to exit.
May 27 03:25:16.219347 systemd-logind[1703]: Removed session 12.
May 27 03:25:16.322895 systemd[1]: Started sshd@10-10.200.8.16:22-10.200.16.10:33872.service - OpenSSH per-connection server daemon (10.200.16.10:33872).
May 27 03:25:16.964242 sshd[5600]: Accepted publickey for core from 10.200.16.10 port 33872 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:16.965487 sshd-session[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:16.970239 systemd-logind[1703]: New session 13 of user core.
May 27 03:25:16.976135 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 03:25:17.485760 sshd[5605]: Connection closed by 10.200.16.10 port 33872
May 27 03:25:17.486338 sshd-session[5600]: pam_unix(sshd:session): session closed for user core
May 27 03:25:17.489918 systemd[1]: sshd@10-10.200.8.16:22-10.200.16.10:33872.service: Deactivated successfully.
May 27 03:25:17.491904 systemd[1]: session-13.scope: Deactivated successfully.
May 27 03:25:17.492716 systemd-logind[1703]: Session 13 logged out. Waiting for processes to exit.
May 27 03:25:17.493917 systemd-logind[1703]: Removed session 13.
May 27 03:25:17.605051 systemd[1]: Started sshd@11-10.200.8.16:22-10.200.16.10:33880.service - OpenSSH per-connection server daemon (10.200.16.10:33880).
May 27 03:25:18.243394 sshd[5615]: Accepted publickey for core from 10.200.16.10 port 33880 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:18.244550 sshd-session[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:18.248450 systemd-logind[1703]: New session 14 of user core.
May 27 03:25:18.253991 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 03:25:18.737341 sshd[5617]: Connection closed by 10.200.16.10 port 33880
May 27 03:25:18.738029 sshd-session[5615]: pam_unix(sshd:session): session closed for user core
May 27 03:25:18.741031 systemd[1]: sshd@11-10.200.8.16:22-10.200.16.10:33880.service: Deactivated successfully.
May 27 03:25:18.743371 systemd[1]: session-14.scope: Deactivated successfully.
May 27 03:25:18.744726 systemd-logind[1703]: Session 14 logged out. Waiting for processes to exit.
May 27 03:25:18.746199 systemd-logind[1703]: Removed session 14.
May 27 03:25:21.768055 containerd[1720]: time="2025-05-27T03:25:21.767996297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\" id:\"30cebae9bd90c2ba9dba08fa511fcce6bca0bec24316441537633313e6c0cddb\" pid:5644 exit_status:1 exited_at:{seconds:1748316321 nanos:767628805}"
May 27 03:25:23.714765 kubelet[3072]: E0527 03:25:23.714675 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879"
May 27 03:25:23.851730 systemd[1]: Started sshd@12-10.200.8.16:22-10.200.16.10:37198.service - OpenSSH per-connection server daemon (10.200.16.10:37198).
May 27 03:25:24.489181 sshd[5656]: Accepted publickey for core from 10.200.16.10 port 37198 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:24.490427 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:24.495111 systemd-logind[1703]: New session 15 of user core.
May 27 03:25:24.499066 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 03:25:24.712662 kubelet[3072]: E0527 03:25:24.712605 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e"
May 27 03:25:24.987692 sshd[5658]: Connection closed by 10.200.16.10 port 37198
May 27 03:25:24.988280 sshd-session[5656]: pam_unix(sshd:session): session closed for user core
May 27 03:25:24.991341 systemd[1]: sshd@12-10.200.8.16:22-10.200.16.10:37198.service: Deactivated successfully.
May 27 03:25:24.993363 systemd[1]: session-15.scope: Deactivated successfully.
May 27 03:25:24.995256 systemd-logind[1703]: Session 15 logged out. Waiting for processes to exit.
May 27 03:25:24.996120 systemd-logind[1703]: Removed session 15.
May 27 03:25:30.010472 containerd[1720]: time="2025-05-27T03:25:30.010373470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" id:\"3455263d5343749a09b648bddd6f7dca9f4ef88fed3bb2d11f5799d977966e76\" pid:5680 exited_at:{seconds:1748316330 nanos:10115637}"
May 27 03:25:30.102112 systemd[1]: Started sshd@13-10.200.8.16:22-10.200.16.10:50752.service - OpenSSH per-connection server daemon (10.200.16.10:50752).
May 27 03:25:30.739436 sshd[5690]: Accepted publickey for core from 10.200.16.10 port 50752 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:30.741007 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:30.750297 systemd-logind[1703]: New session 16 of user core.
May 27 03:25:30.757911 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 03:25:31.253798 sshd[5692]: Connection closed by 10.200.16.10 port 50752
May 27 03:25:31.254334 sshd-session[5690]: pam_unix(sshd:session): session closed for user core
May 27 03:25:31.259243 systemd[1]: sshd@13-10.200.8.16:22-10.200.16.10:50752.service: Deactivated successfully.
May 27 03:25:31.261398 systemd[1]: session-16.scope: Deactivated successfully.
May 27 03:25:31.262160 systemd-logind[1703]: Session 16 logged out. Waiting for processes to exit.
May 27 03:25:31.264393 systemd-logind[1703]: Removed session 16.
May 27 03:25:36.367783 systemd[1]: Started sshd@14-10.200.8.16:22-10.200.16.10:50764.service - OpenSSH per-connection server daemon (10.200.16.10:50764).
May 27 03:25:36.712850 kubelet[3072]: E0527 03:25:36.712687 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e"
May 27 03:25:37.004052 sshd[5704]: Accepted publickey for core from 10.200.16.10 port 50764 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:37.006929 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:37.015497 systemd-logind[1703]: New session 17 of user core.
May 27 03:25:37.022522 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 03:25:37.556020 sshd[5706]: Connection closed by 10.200.16.10 port 50764
May 27 03:25:37.556718 sshd-session[5704]: pam_unix(sshd:session): session closed for user core
May 27 03:25:37.560847 systemd[1]: sshd@14-10.200.8.16:22-10.200.16.10:50764.service: Deactivated successfully.
May 27 03:25:37.562832 systemd[1]: session-17.scope: Deactivated successfully.
May 27 03:25:37.563616 systemd-logind[1703]: Session 17 logged out. Waiting for processes to exit.
May 27 03:25:37.565202 systemd-logind[1703]: Removed session 17.
May 27 03:25:37.669074 systemd[1]: Started sshd@15-10.200.8.16:22-10.200.16.10:50772.service - OpenSSH per-connection server daemon (10.200.16.10:50772).
May 27 03:25:37.712807 kubelet[3072]: E0527 03:25:37.712697 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879"
May 27 03:25:38.306989 sshd[5718]: Accepted publickey for core from 10.200.16.10 port 50772 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:38.308784 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:38.312980 systemd-logind[1703]: New session 18 of user core.
May 27 03:25:38.320036 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 03:25:38.876278 sshd[5720]: Connection closed by 10.200.16.10 port 50772
May 27 03:25:38.876934 sshd-session[5718]: pam_unix(sshd:session): session closed for user core
May 27 03:25:38.880854 systemd[1]: sshd@15-10.200.8.16:22-10.200.16.10:50772.service: Deactivated successfully.
May 27 03:25:38.883289 systemd[1]: session-18.scope: Deactivated successfully.
May 27 03:25:38.884335 systemd-logind[1703]: Session 18 logged out. Waiting for processes to exit.
May 27 03:25:38.885522 systemd-logind[1703]: Removed session 18.
May 27 03:25:38.989096 systemd[1]: Started sshd@16-10.200.8.16:22-10.200.16.10:41748.service - OpenSSH per-connection server daemon (10.200.16.10:41748).
May 27 03:25:39.627795 sshd[5730]: Accepted publickey for core from 10.200.16.10 port 41748 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:39.629093 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:39.633767 systemd-logind[1703]: New session 19 of user core.
May 27 03:25:39.639080 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 03:25:40.831228 sshd[5732]: Connection closed by 10.200.16.10 port 41748
May 27 03:25:40.831846 sshd-session[5730]: pam_unix(sshd:session): session closed for user core
May 27 03:25:40.835058 systemd[1]: sshd@16-10.200.8.16:22-10.200.16.10:41748.service: Deactivated successfully.
May 27 03:25:40.837129 systemd[1]: session-19.scope: Deactivated successfully.
May 27 03:25:40.839490 systemd-logind[1703]: Session 19 logged out. Waiting for processes to exit.
May 27 03:25:40.840419 systemd-logind[1703]: Removed session 19.
May 27 03:25:40.943074 systemd[1]: Started sshd@17-10.200.8.16:22-10.200.16.10:41754.service - OpenSSH per-connection server daemon (10.200.16.10:41754).
May 27 03:25:41.579536 sshd[5751]: Accepted publickey for core from 10.200.16.10 port 41754 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:41.581026 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:41.585854 systemd-logind[1703]: New session 20 of user core.
May 27 03:25:41.593040 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 03:25:41.782211 containerd[1720]: time="2025-05-27T03:25:41.782166226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" id:\"9605c7afca4e08a1cfbda56587cf6399830fd455c8883deb87597bf3669f4eea\" pid:5767 exited_at:{seconds:1748316341 nanos:781931776}"
May 27 03:25:42.140271 sshd[5753]: Connection closed by 10.200.16.10 port 41754
May 27 03:25:42.140812 sshd-session[5751]: pam_unix(sshd:session): session closed for user core
May 27 03:25:42.143565 systemd[1]: sshd@17-10.200.8.16:22-10.200.16.10:41754.service: Deactivated successfully.
May 27 03:25:42.145871 systemd[1]: session-20.scope: Deactivated successfully.
May 27 03:25:42.148386 systemd-logind[1703]: Session 20 logged out. Waiting for processes to exit.
May 27 03:25:42.149987 systemd-logind[1703]: Removed session 20.
May 27 03:25:42.255958 systemd[1]: Started sshd@18-10.200.8.16:22-10.200.16.10:41756.service - OpenSSH per-connection server daemon (10.200.16.10:41756).
May 27 03:25:42.890666 sshd[5785]: Accepted publickey for core from 10.200.16.10 port 41756 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:42.891909 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:42.896485 systemd-logind[1703]: New session 21 of user core.
May 27 03:25:42.902312 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 03:25:43.383455 sshd[5787]: Connection closed by 10.200.16.10 port 41756
May 27 03:25:43.383998 sshd-session[5785]: pam_unix(sshd:session): session closed for user core
May 27 03:25:43.387580 systemd[1]: sshd@18-10.200.8.16:22-10.200.16.10:41756.service: Deactivated successfully.
May 27 03:25:43.389549 systemd[1]: session-21.scope: Deactivated successfully.
May 27 03:25:43.390261 systemd-logind[1703]: Session 21 logged out. Waiting for processes to exit.
May 27 03:25:43.391860 systemd-logind[1703]: Removed session 21.
May 27 03:25:48.497646 systemd[1]: Started sshd@19-10.200.8.16:22-10.200.16.10:41764.service - OpenSSH per-connection server daemon (10.200.16.10:41764).
May 27 03:25:49.132544 sshd[5804]: Accepted publickey for core from 10.200.16.10 port 41764 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:49.133827 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:49.138099 systemd-logind[1703]: New session 22 of user core.
May 27 03:25:49.147045 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 03:25:49.631873 sshd[5806]: Connection closed by 10.200.16.10 port 41764
May 27 03:25:49.632454 sshd-session[5804]: pam_unix(sshd:session): session closed for user core
May 27 03:25:49.635346 systemd[1]: sshd@19-10.200.8.16:22-10.200.16.10:41764.service: Deactivated successfully.
May 27 03:25:49.637441 systemd[1]: session-22.scope: Deactivated successfully.
May 27 03:25:49.639553 systemd-logind[1703]: Session 22 logged out. Waiting for processes to exit.
May 27 03:25:49.640495 systemd-logind[1703]: Removed session 22.
May 27 03:25:49.714843 kubelet[3072]: E0527 03:25:49.714801 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e"
May 27 03:25:49.715579 kubelet[3072]: E0527 03:25:49.715497 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879"
May 27 03:25:51.752087 containerd[1720]: time="2025-05-27T03:25:51.752009744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a9aebb958b59ce220b6a8c39e24a46322b7207e04a6237da90672aa9320dd1\" id:\"4d2dce60533aaf496f67826b783c57707edcf4c98174ab533221c4395ebcc28a\" pid:5830 exited_at:{seconds:1748316351 nanos:751682997}"
May 27 03:25:54.751769 systemd[1]: Started sshd@20-10.200.8.16:22-10.200.16.10:47306.service - OpenSSH per-connection server daemon (10.200.16.10:47306).
May 27 03:25:55.392547 sshd[5842]: Accepted publickey for core from 10.200.16.10 port 47306 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:25:55.393784 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:55.397845 systemd-logind[1703]: New session 23 of user core.
May 27 03:25:55.403064 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 03:25:55.887745 sshd[5844]: Connection closed by 10.200.16.10 port 47306
May 27 03:25:55.888292 sshd-session[5842]: pam_unix(sshd:session): session closed for user core
May 27 03:25:55.891713 systemd[1]: sshd@20-10.200.8.16:22-10.200.16.10:47306.service: Deactivated successfully.
May 27 03:25:55.893982 systemd[1]: session-23.scope: Deactivated successfully.
May 27 03:25:55.895208 systemd-logind[1703]: Session 23 logged out. Waiting for processes to exit.
May 27 03:25:55.896383 systemd-logind[1703]: Removed session 23.
May 27 03:25:59.934406 containerd[1720]: time="2025-05-27T03:25:59.934345187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05bdcf5d3dc5e9192df463d2888cea7c027008a262be9d7a407794373880c1f5\" id:\"74e68d571cd73ab3515c434ae2d0b1021eeb6d3544997b6302fc5bc3d2bbf231\" pid:5873 exited_at:{seconds:1748316359 nanos:934104580}"
May 27 03:26:01.001667 systemd[1]: Started sshd@21-10.200.8.16:22-10.200.16.10:33876.service - OpenSSH per-connection server daemon (10.200.16.10:33876).
May 27 03:26:01.635773 sshd[5883]: Accepted publickey for core from 10.200.16.10 port 33876 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:26:01.637068 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:26:01.641747 systemd-logind[1703]: New session 24 of user core.
May 27 03:26:01.648044 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 03:26:02.128690 sshd[5885]: Connection closed by 10.200.16.10 port 33876
May 27 03:26:02.129215 sshd-session[5883]: pam_unix(sshd:session): session closed for user core
May 27 03:26:02.132155 systemd[1]: sshd@21-10.200.8.16:22-10.200.16.10:33876.service: Deactivated successfully.
May 27 03:26:02.134348 systemd[1]: session-24.scope: Deactivated successfully.
May 27 03:26:02.135717 systemd-logind[1703]: Session 24 logged out. Waiting for processes to exit.
May 27 03:26:02.136845 systemd-logind[1703]: Removed session 24.
May 27 03:26:03.714151 kubelet[3072]: E0527 03:26:03.714009 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879"
May 27 03:26:04.712282 kubelet[3072]: E0527 03:26:04.712213 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e"
May 27 03:26:07.243501 systemd[1]: Started sshd@22-10.200.8.16:22-10.200.16.10:33878.service - OpenSSH per-connection server daemon (10.200.16.10:33878).
May 27 03:26:07.881212 sshd[5899]: Accepted publickey for core from 10.200.16.10 port 33878 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:26:07.882529 sshd-session[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:26:07.887425 systemd-logind[1703]: New session 25 of user core.
May 27 03:26:07.894074 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 03:26:08.381890 sshd[5901]: Connection closed by 10.200.16.10 port 33878
May 27 03:26:08.382475 sshd-session[5899]: pam_unix(sshd:session): session closed for user core
May 27 03:26:08.386493 systemd[1]: sshd@22-10.200.8.16:22-10.200.16.10:33878.service: Deactivated successfully.
May 27 03:26:08.388787 systemd[1]: session-25.scope: Deactivated successfully.
May 27 03:26:08.389616 systemd-logind[1703]: Session 25 logged out. Waiting for processes to exit.
May 27 03:26:08.390915 systemd-logind[1703]: Removed session 25.
May 27 03:26:13.500603 systemd[1]: Started sshd@23-10.200.8.16:22-10.200.16.10:42844.service - OpenSSH per-connection server daemon (10.200.16.10:42844).
May 27 03:26:14.141744 sshd[5913]: Accepted publickey for core from 10.200.16.10 port 42844 ssh2: RSA SHA256:+Fe8XoeidIDpreT5xokg3fL/NOTS8jCIdPAwqh1eVaU
May 27 03:26:14.143164 sshd-session[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:26:14.147853 systemd-logind[1703]: New session 26 of user core.
May 27 03:26:14.155046 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 03:26:14.638577 sshd[5915]: Connection closed by 10.200.16.10 port 42844
May 27 03:26:14.639189 sshd-session[5913]: pam_unix(sshd:session): session closed for user core
May 27 03:26:14.642859 systemd[1]: sshd@23-10.200.8.16:22-10.200.16.10:42844.service: Deactivated successfully.
May 27 03:26:14.645082 systemd[1]: session-26.scope: Deactivated successfully.
May 27 03:26:14.645785 systemd-logind[1703]: Session 26 logged out. Waiting for processes to exit.
May 27 03:26:14.647340 systemd-logind[1703]: Removed session 26.
May 27 03:26:16.713429 containerd[1720]: time="2025-05-27T03:26:16.713363697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:26:16.912988 containerd[1720]: time="2025-05-27T03:26:16.912927725Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:26:16.915500 containerd[1720]: time="2025-05-27T03:26:16.915419306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:26:16.915640 containerd[1720]: time="2025-05-27T03:26:16.915457787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:26:16.915912 kubelet[3072]: E0527 03:26:16.915818 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:26:16.915912 kubelet[3072]: E0527 03:26:16.915901 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:26:16.916626 kubelet[3072]: E0527 03:26:16.916289 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:db16f07b00b84e98ae55f4b0d0db1dcc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:26:16.919379 containerd[1720]: time="2025-05-27T03:26:16.919354711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:26:17.088918 containerd[1720]: time="2025-05-27T03:26:17.088772462Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:26:17.091456 containerd[1720]: time="2025-05-27T03:26:17.091417203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:26:17.091597 containerd[1720]: time="2025-05-27T03:26:17.091441225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:26:17.091682 kubelet[3072]: E0527 03:26:17.091636 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:26:17.091745 kubelet[3072]: E0527 03:26:17.091701 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:26:17.091887 kubelet[3072]: E0527 03:26:17.091845 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68kpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bdfd48f46-dwlzl_calico-system(8c6da0c4-086c-499a-b60b-ce24550cd879): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:26:17.093085 kubelet[3072]: E0527 03:26:17.093064 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6bdfd48f46-dwlzl" podUID="8c6da0c4-086c-499a-b60b-ce24550cd879"
May 27 03:26:19.713623 containerd[1720]: time="2025-05-27T03:26:19.713380664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:26:19.895892 containerd[1720]: time="2025-05-27T03:26:19.895821419Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:26:19.898791 containerd[1720]: time="2025-05-27T03:26:19.898768803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:26:19.898867 containerd[1720]: time="2025-05-27T03:26:19.898773438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:26:19.899059 kubelet[3072]: E0527 03:26:19.899008 3072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:26:19.899370 kubelet[3072]: E0527 03:26:19.899070 3072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:26:19.899370 kubelet[3072]: E0527 03:26:19.899237 3072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn4c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-mlshg_calico-system(4d81e6cc-fb55-4862-b180-e72b2e08be0e): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:26:19.900419 kubelet[3072]: E0527 03:26:19.900387 3072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-mlshg" podUID="4d81e6cc-fb55-4862-b180-e72b2e08be0e"