May 27 17:03:04.061163 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
May 27 17:03:04.061181 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 15:31:23 -00 2025
May 27 17:03:04.061188 kernel: KASLR enabled
May 27 17:03:04.061192 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 27 17:03:04.061197 kernel: printk: legacy bootconsole [pl11] enabled
May 27 17:03:04.061201 kernel: efi: EFI v2.7 by EDK II
May 27 17:03:04.061206 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
May 27 17:03:04.061210 kernel: random: crng init done
May 27 17:03:04.061214 kernel: secureboot: Secure boot disabled
May 27 17:03:04.061218 kernel: ACPI: Early table checksum verification disabled
May 27 17:03:04.061222 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 27 17:03:04.061226 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:03:04.061230 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:03:04.061235 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 27 17:03:04.061240 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:03:04.061244 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:03:04.061248 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:03:04.061254 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:03:04.061258 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:03:04.061262 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:03:04.061266 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 27 17:03:04.061270 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:03:04.061274 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 27 17:03:04.061278 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 27 17:03:04.061283 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 27 17:03:04.061287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
May 27 17:03:04.061291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
May 27 17:03:04.061295 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 27 17:03:04.061299 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 27 17:03:04.061304 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 27 17:03:04.061309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 27 17:03:04.061313 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 27 17:03:04.061317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 27 17:03:04.061321 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 27 17:03:04.061325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 27 17:03:04.061329 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 27 17:03:04.061334 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
May 27 17:03:04.061338 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff]
May 27 17:03:04.061342 kernel: Zone ranges:
May 27 17:03:04.061346 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 27 17:03:04.061353 kernel: DMA32 empty
May 27 17:03:04.061357 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 27 17:03:04.061362 kernel: Device empty
May 27 17:03:04.061366 kernel: Movable zone start for each node
May 27 17:03:04.061370 kernel: Early memory node ranges
May 27 17:03:04.061375 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 27 17:03:04.061380 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
May 27 17:03:04.061384 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
May 27 17:03:04.061388 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
May 27 17:03:04.061393 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 27 17:03:04.061397 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 27 17:03:04.061401 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 27 17:03:04.061405 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 27 17:03:04.061410 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 27 17:03:04.061414 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 27 17:03:04.061418 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 27 17:03:04.061422 kernel: psci: probing for conduit method from ACPI.
May 27 17:03:04.061428 kernel: psci: PSCIv1.1 detected in firmware.
May 27 17:03:04.061432 kernel: psci: Using standard PSCI v0.2 function IDs
May 27 17:03:04.061436 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 27 17:03:04.061440 kernel: psci: SMC Calling Convention v1.4
May 27 17:03:04.061445 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 27 17:03:04.061449 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 27 17:03:04.061453 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 27 17:03:04.061458 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 27 17:03:04.061462 kernel: pcpu-alloc: [0] 0 [0] 1
May 27 17:03:04.061466 kernel: Detected PIPT I-cache on CPU0
May 27 17:03:04.061471 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
May 27 17:03:04.061476 kernel: CPU features: detected: GIC system register CPU interface
May 27 17:03:04.061480 kernel: CPU features: detected: Spectre-v4
May 27 17:03:04.061484 kernel: CPU features: detected: Spectre-BHB
May 27 17:03:04.061488 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 27 17:03:04.061493 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 27 17:03:04.061497 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
May 27 17:03:04.061501 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 27 17:03:04.061506 kernel: alternatives: applying boot alternatives
May 27 17:03:04.061511 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:03:04.061516 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:03:04.061520 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:03:04.061525 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:03:04.061530 kernel: Fallback order for Node 0: 0
May 27 17:03:04.061534 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
May 27 17:03:04.061538 kernel: Policy zone: Normal
May 27 17:03:04.061542 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:03:04.061547 kernel: software IO TLB: area num 2.
May 27 17:03:04.061551 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
May 27 17:03:04.061555 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 17:03:04.061559 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:03:04.061564 kernel: rcu: RCU event tracing is enabled.
May 27 17:03:04.061569 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 17:03:04.061574 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:03:04.061578 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:03:04.061583 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:03:04.061587 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 17:03:04.061591 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:03:04.061596 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:03:04.061600 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 27 17:03:04.061604 kernel: GICv3: 960 SPIs implemented
May 27 17:03:04.061608 kernel: GICv3: 0 Extended SPIs implemented
May 27 17:03:04.061613 kernel: Root IRQ handler: gic_handle_irq
May 27 17:03:04.061617 kernel: GICv3: GICv3 features: 16 PPIs, RSS
May 27 17:03:04.061621 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
May 27 17:03:04.061627 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 27 17:03:04.061631 kernel: ITS: No ITS available, not enabling LPIs
May 27 17:03:04.061636 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:03:04.061640 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
May 27 17:03:04.061644 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 17:03:04.061649 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
May 27 17:03:04.061653 kernel: Console: colour dummy device 80x25
May 27 17:03:04.061658 kernel: printk: legacy console [tty1] enabled
May 27 17:03:04.061662 kernel: ACPI: Core revision 20240827
May 27 17:03:04.061667 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
May 27 17:03:04.061672 kernel: pid_max: default: 32768 minimum: 301
May 27 17:03:04.061677 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:03:04.061681 kernel: landlock: Up and running.
May 27 17:03:04.061685 kernel: SELinux: Initializing.
May 27 17:03:04.061690 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:03:04.061695 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:03:04.061703 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
May 27 17:03:04.061709 kernel: Hyper-V: Host Build 10.0.26100.1254-1-0
May 27 17:03:04.061713 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 27 17:03:04.061718 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:03:04.061722 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:03:04.061727 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:03:04.061733 kernel: Remapping and enabling EFI services.
May 27 17:03:04.061738 kernel: smp: Bringing up secondary CPUs ...
May 27 17:03:04.061742 kernel: Detected PIPT I-cache on CPU1
May 27 17:03:04.061747 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 27 17:03:04.061752 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
May 27 17:03:04.061757 kernel: smp: Brought up 1 node, 2 CPUs
May 27 17:03:04.061762 kernel: SMP: Total of 2 processors activated.
May 27 17:03:04.061767 kernel: CPU: All CPU(s) started at EL1
May 27 17:03:04.061772 kernel: CPU features: detected: 32-bit EL0 Support
May 27 17:03:04.061776 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 27 17:03:04.061781 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 27 17:03:04.061786 kernel: CPU features: detected: Common not Private translations
May 27 17:03:04.061791 kernel: CPU features: detected: CRC32 instructions
May 27 17:03:04.061795 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
May 27 17:03:04.061801 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 27 17:03:04.061806 kernel: CPU features: detected: LSE atomic instructions
May 27 17:03:04.061810 kernel: CPU features: detected: Privileged Access Never
May 27 17:03:04.061815 kernel: CPU features: detected: Speculation barrier (SB)
May 27 17:03:04.061820 kernel: CPU features: detected: TLB range maintenance instructions
May 27 17:03:04.061824 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 27 17:03:04.061829 kernel: CPU features: detected: Scalable Vector Extension
May 27 17:03:04.061834 kernel: alternatives: applying system-wide alternatives
May 27 17:03:04.061838 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
May 27 17:03:04.061844 kernel: SVE: maximum available vector length 16 bytes per vector
May 27 17:03:04.061849 kernel: SVE: default vector length 16 bytes per vector
May 27 17:03:04.061854 kernel: Memory: 3976112K/4194160K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 213432K reserved, 0K cma-reserved)
May 27 17:03:04.061859 kernel: devtmpfs: initialized
May 27 17:03:04.061863 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:03:04.061868 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 17:03:04.061873 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 27 17:03:04.061878 kernel: 0 pages in range for non-PLT usage
May 27 17:03:04.061892 kernel: 508544 pages in range for PLT usage
May 27 17:03:04.061898 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:03:04.061902 kernel: SMBIOS 3.1.0 present.
May 27 17:03:04.061907 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 27 17:03:04.061912 kernel: DMI: Memory slots populated: 2/2
May 27 17:03:04.061917 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:03:04.061921 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 27 17:03:04.061926 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 27 17:03:04.061931 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 27 17:03:04.061936 kernel: audit: initializing netlink subsys (disabled)
May 27 17:03:04.061941 kernel: audit: type=2000 audit(0.064:1): state=initialized audit_enabled=0 res=1
May 27 17:03:04.061946 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:03:04.061951 kernel: cpuidle: using governor menu
May 27 17:03:04.061956 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 27 17:03:04.061960 kernel: ASID allocator initialised with 32768 entries
May 27 17:03:04.061965 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:03:04.061970 kernel: Serial: AMBA PL011 UART driver
May 27 17:03:04.061974 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:03:04.061979 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:03:04.061985 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 27 17:03:04.061990 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 27 17:03:04.061994 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:03:04.061999 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:03:04.062004 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 27 17:03:04.062009 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 27 17:03:04.062013 kernel: ACPI: Added _OSI(Module Device)
May 27 17:03:04.062018 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:03:04.062022 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:03:04.062028 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:03:04.062033 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:03:04.062037 kernel: ACPI: Interpreter enabled
May 27 17:03:04.062042 kernel: ACPI: Using GIC for interrupt routing
May 27 17:03:04.062047 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 27 17:03:04.062052 kernel: printk: legacy console [ttyAMA0] enabled
May 27 17:03:04.062056 kernel: printk: legacy bootconsole [pl11] disabled
May 27 17:03:04.062061 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 27 17:03:04.062066 kernel: ACPI: CPU0 has been hot-added
May 27 17:03:04.062071 kernel: ACPI: CPU1 has been hot-added
May 27 17:03:04.062076 kernel: iommu: Default domain type: Translated
May 27 17:03:04.062081 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 27 17:03:04.062086 kernel: efivars: Registered efivars operations
May 27 17:03:04.062090 kernel: vgaarb: loaded
May 27 17:03:04.062095 kernel: clocksource: Switched to clocksource arch_sys_counter
May 27 17:03:04.062100 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:03:04.062104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:03:04.062109 kernel: pnp: PnP ACPI init
May 27 17:03:04.062115 kernel: pnp: PnP ACPI: found 0 devices
May 27 17:03:04.062120 kernel: NET: Registered PF_INET protocol family
May 27 17:03:04.062124 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:03:04.062129 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 17:03:04.062134 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:03:04.062138 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:03:04.062143 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 17:03:04.062148 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 17:03:04.062153 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:03:04.062158 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:03:04.062163 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:03:04.062168 kernel: PCI: CLS 0 bytes, default 64
May 27 17:03:04.062172 kernel: kvm [1]: HYP mode not available
May 27 17:03:04.062177 kernel: Initialise system trusted keyrings
May 27 17:03:04.062181 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 17:03:04.062186 kernel: Key type asymmetric registered
May 27 17:03:04.062191 kernel: Asymmetric key parser 'x509' registered
May 27 17:03:04.062195 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 27 17:03:04.062201 kernel: io scheduler mq-deadline registered
May 27 17:03:04.062206 kernel: io scheduler kyber registered
May 27 17:03:04.062210 kernel: io scheduler bfq registered
May 27 17:03:04.062215 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:03:04.062220 kernel: thunder_xcv, ver 1.0
May 27 17:03:04.062224 kernel: thunder_bgx, ver 1.0
May 27 17:03:04.062229 kernel: nicpf, ver 1.0
May 27 17:03:04.062233 kernel: nicvf, ver 1.0
May 27 17:03:04.062348 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 27 17:03:04.062400 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T17:03:03 UTC (1748365383)
May 27 17:03:04.062406 kernel: efifb: probing for efifb
May 27 17:03:04.062411 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 27 17:03:04.062416 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 27 17:03:04.062420 kernel: efifb: scrolling: redraw
May 27 17:03:04.062425 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 17:03:04.062430 kernel: Console: switching to colour frame buffer device 128x48
May 27 17:03:04.062435 kernel: fb0: EFI VGA frame buffer device
May 27 17:03:04.062441 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 27 17:03:04.062445 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 17:03:04.062450 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 27 17:03:04.062455 kernel: NET: Registered PF_INET6 protocol family
May 27 17:03:04.062459 kernel: watchdog: NMI not fully supported
May 27 17:03:04.062464 kernel: watchdog: Hard watchdog permanently disabled
May 27 17:03:04.062469 kernel: Segment Routing with IPv6
May 27 17:03:04.062473 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:03:04.062478 kernel: NET: Registered PF_PACKET protocol family
May 27 17:03:04.062484 kernel: Key type dns_resolver registered
May 27 17:03:04.062489 kernel: registered taskstats version 1
May 27 17:03:04.062494 kernel: Loading compiled-in X.509 certificates
May 27 17:03:04.062498 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 8e5e45c34fa91568ef1fa3bdfd5a71a43b4c4580'
May 27 17:03:04.062503 kernel: Demotion targets for Node 0: null
May 27 17:03:04.062508 kernel: Key type .fscrypt registered
May 27 17:03:04.062512 kernel: Key type fscrypt-provisioning registered
May 27 17:03:04.062517 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:03:04.062522 kernel: ima: Allocated hash algorithm: sha1
May 27 17:03:04.062528 kernel: ima: No architecture policies found
May 27 17:03:04.062532 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 27 17:03:04.062537 kernel: clk: Disabling unused clocks
May 27 17:03:04.062542 kernel: PM: genpd: Disabling unused power domains
May 27 17:03:04.062547 kernel: Warning: unable to open an initial console.
May 27 17:03:04.062551 kernel: Freeing unused kernel memory: 39424K
May 27 17:03:04.062556 kernel: Run /init as init process
May 27 17:03:04.062561 kernel: with arguments:
May 27 17:03:04.062565 kernel: /init
May 27 17:03:04.062571 kernel: with environment:
May 27 17:03:04.062575 kernel: HOME=/
May 27 17:03:04.062580 kernel: TERM=linux
May 27 17:03:04.062585 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:03:04.062591 systemd[1]: Successfully made /usr/ read-only.
May 27 17:03:04.062597 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:03:04.062603 systemd[1]: Detected virtualization microsoft.
May 27 17:03:04.062609 systemd[1]: Detected architecture arm64.
May 27 17:03:04.062614 systemd[1]: Running in initrd.
May 27 17:03:04.062619 systemd[1]: No hostname configured, using default hostname.
May 27 17:03:04.062624 systemd[1]: Hostname set to .
May 27 17:03:04.062629 systemd[1]: Initializing machine ID from random generator.
May 27 17:03:04.062634 systemd[1]: Queued start job for default target initrd.target.
May 27 17:03:04.062639 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:03:04.062644 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:03:04.062651 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:03:04.062656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:03:04.062661 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:03:04.062667 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:03:04.062673 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:03:04.062678 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:03:04.062683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:03:04.062689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:03:04.062694 systemd[1]: Reached target paths.target - Path Units.
May 27 17:03:04.062699 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:03:04.062704 systemd[1]: Reached target swap.target - Swaps.
May 27 17:03:04.062709 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:03:04.062714 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:03:04.062719 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:03:04.062725 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:03:04.062730 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:03:04.062736 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:03:04.062741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:03:04.062746 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:03:04.062751 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:03:04.062757 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:03:04.062762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:03:04.062767 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:03:04.062772 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:03:04.062778 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:03:04.062783 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:03:04.062788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:03:04.062793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:03:04.062811 systemd-journald[224]: Collecting audit messages is disabled.
May 27 17:03:04.062827 systemd-journald[224]: Journal started
May 27 17:03:04.062840 systemd-journald[224]: Runtime Journal (/run/log/journal/91a6632076e44720884e6cfb4cc9359b) is 8M, max 78.5M, 70.5M free.
May 27 17:03:04.072246 systemd-modules-load[226]: Inserted module 'overlay'
May 27 17:03:04.083212 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:03:04.088261 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:03:04.106610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:03:04.106634 kernel: Bridge firewalling registered
May 27 17:03:04.101569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:03:04.105861 systemd-modules-load[226]: Inserted module 'br_netfilter'
May 27 17:03:04.111528 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:03:04.119422 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:03:04.126517 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:03:04.136374 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 17:03:04.146597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:03:04.164917 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:03:04.173655 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:03:04.193101 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:03:04.201123 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:03:04.208497 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:03:04.214663 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:03:04.219809 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:03:04.231061 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:03:04.259071 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:03:04.273605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:03:04.287494 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:03:04.314128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:03:04.334003 systemd-resolved[262]: Positive Trust Anchors:
May 27 17:03:04.334015 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:03:04.334034 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:03:04.338772 systemd-resolved[262]: Defaulting to hostname 'linux'.
May 27 17:03:04.339584 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:03:04.350401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:03:04.394894 kernel: SCSI subsystem initialized
May 27 17:03:04.399895 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:03:04.409910 kernel: iscsi: registered transport (tcp)
May 27 17:03:04.422480 kernel: iscsi: registered transport (qla4xxx)
May 27 17:03:04.422499 kernel: QLogic iSCSI HBA Driver
May 27 17:03:04.436051 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:03:04.462915 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:03:04.469395 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:03:04.515939 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:03:04.523066 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:03:04.579923 kernel: raid6: neonx8 gen() 18540 MB/s
May 27 17:03:04.598905 kernel: raid6: neonx4 gen() 18577 MB/s
May 27 17:03:04.618919 kernel: raid6: neonx2 gen() 17043 MB/s
May 27 17:03:04.637906 kernel: raid6: neonx1 gen() 15019 MB/s
May 27 17:03:04.656923 kernel: raid6: int64x8 gen() 10555 MB/s
May 27 17:03:04.675922 kernel: raid6: int64x4 gen() 10609 MB/s
May 27 17:03:04.694901 kernel: raid6: int64x2 gen() 8979 MB/s
May 27 17:03:04.715803 kernel: raid6: int64x1 gen() 7001 MB/s
May 27 17:03:04.715828 kernel: raid6: using algorithm neonx4 gen() 18577 MB/s
May 27 17:03:04.739142 kernel: raid6: .... xor() 15148 MB/s, rmw enabled
May 27 17:03:04.739185 kernel: raid6: using neon recovery algorithm
May 27 17:03:04.746772 kernel: xor: measuring software checksum speed
May 27 17:03:04.746795 kernel: 8regs : 28579 MB/sec
May 27 17:03:04.749277 kernel: 32regs : 28771 MB/sec
May 27 17:03:04.751619 kernel: arm64_neon : 37656 MB/sec
May 27 17:03:04.754540 kernel: xor: using function: arm64_neon (37656 MB/sec)
May 27 17:03:04.792904 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:03:04.799181 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:03:04.809065 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:03:04.828499 systemd-udevd[474]: Using default interface naming scheme 'v255'.
May 27 17:03:04.831473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:03:04.845835 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:03:04.877382 dracut-pre-trigger[488]: rd.md=0: removing MD RAID activation
May 27 17:03:04.899268 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:03:04.905121 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:03:04.955267 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:03:04.962573 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:03:05.023902 kernel: hv_vmbus: Vmbus version:5.3
May 27 17:03:05.037222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:03:05.037336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:03:05.059677 kernel: pps_core: LinuxPPS API ver. 1 registered
May 27 17:03:05.059701 kernel: hv_vmbus: registering driver hyperv_keyboard
May 27 17:03:05.059708 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 27 17:03:05.067938 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:03:05.097724 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 27 17:03:05.097745 kernel: hv_vmbus: registering driver hid_hyperv
May 27 17:03:05.097752 kernel: hv_vmbus: registering driver hv_netvsc
May 27 17:03:05.097759 kernel: hv_vmbus: registering driver hv_storvsc
May 27 17:03:05.097776 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 27 17:03:05.097782 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 27 17:03:05.097958 kernel: scsi host0: storvsc_host_t
May 27 17:03:05.098049 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 27 17:03:05.103241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:03:05.111209 kernel: scsi host1: storvsc_host_t
May 27 17:03:05.111386 kernel: PTP clock support registered
May 27 17:03:05.123971 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
May 27 17:03:05.124674 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:03:05.134614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:03:05.134724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:03:05.157025 kernel: hv_utils: Registering HyperV Utility Driver
May 27 17:03:05.157098 kernel: hv_vmbus: registering driver hv_utils
May 27 17:03:05.159899 kernel: hv_utils: Heartbeat IC version 3.0
May 27 17:03:05.162388 kernel: hv_utils: Shutdown IC version 3.2
May 27 17:03:05.165205 kernel: hv_utils: TimeSync IC version 4.0
May 27 17:03:05.165776 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:03:04.945756 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 27 17:03:04.957613 kernel: hv_netvsc 00224879-2645-0022-4879-264500224879 eth0: VF slot 1 added
May 27 17:03:04.957743 systemd-journald[224]: Time jumped backwards, rotating.
May 27 17:03:04.957772 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 27 17:03:04.957874 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 27 17:03:04.929243 systemd-resolved[262]: Clock change detected. Flushing caches.
May 27 17:03:04.969628 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 27 17:03:04.969799 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 27 17:03:04.931200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:03:04.981925 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 27 17:03:04.982965 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#138 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 27 17:03:04.989661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:03:05.007620 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 17:03:05.007676 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 27 17:03:05.017203 kernel: hv_vmbus: registering driver hv_pci
May 27 17:03:05.017265 kernel: hv_pci 19704f56-0781-439c-90c2-a1897baf681c: PCI VMBus probing: Using version 0x10004
May 27 17:03:05.022448 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 27 17:03:05.022629 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 17:03:05.022645 kernel: hv_pci 19704f56-0781-439c-90c2-a1897baf681c: PCI host bridge to bus 0781:00
May 27 17:03:05.026841 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 27 17:03:05.027015 kernel: pci_bus 0781:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 27 17:03:05.038497 kernel: pci_bus 0781:00: No busn resource found for root bus, will use [bus 00-ff]
May 27 17:03:05.043845 kernel: pci 0781:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
May 27 17:03:05.054328 kernel: pci 0781:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 27 17:03:05.054414 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#147 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:03:05.054566 kernel: pci 0781:00:02.0: enabling Extended Tags
May 27 17:03:05.074914 kernel: pci 0781:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0781:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
May 27 17:03:05.083021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:03:05.083208 kernel: pci_bus 0781:00: busn_res: [bus 00-ff] end is updated to 00
May 27 17:03:05.090042 kernel: pci 0781:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
May 27 17:03:05.151904 kernel: mlx5_core 0781:00:02.0: enabling device (0000 -> 0002)
May 27 17:03:05.159127 kernel: mlx5_core 0781:00:02.0: PTM is not supported by PCIe
May 27 17:03:05.159311 kernel: mlx5_core 0781:00:02.0: firmware version: 16.30.5006
May 27 17:03:05.328119 kernel: hv_netvsc 00224879-2645-0022-4879-264500224879 eth0: VF registering: eth1
May 27 17:03:05.328346 kernel: mlx5_core 0781:00:02.0 eth1: joined to eth0
May 27 17:03:05.333980 kernel: mlx5_core 0781:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 27 17:03:05.342848 kernel: mlx5_core 0781:00:02.0 enP1921s1: renamed from eth1
May 27 17:03:05.735412 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 27 17:03:05.819666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 27 17:03:05.850480 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 27 17:03:05.875119 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 27 17:03:05.880126 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 27 17:03:05.891512 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:03:05.900983 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:03:05.908561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:03:05.917767 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:03:05.927030 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:03:05.953622 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:03:05.970617 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 27 17:03:05.981925 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 17:03:05.983959 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:03:07.007726 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#163 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 27 17:03:07.020412 disk-uuid[662]: The operation has completed successfully.
May 27 17:03:07.024088 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 17:03:07.087385 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:03:07.087484 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:03:07.113524 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:03:07.130119 sh[827]: Success
May 27 17:03:07.166145 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:03:07.166210 kernel: device-mapper: uevent: version 1.0.3
May 27 17:03:07.170739 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:03:07.180870 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 27 17:03:07.408489 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:03:07.418192 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:03:07.435129 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:03:07.451849 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:03:07.458836 kernel: BTRFS: device fsid 3c8c76ef-f1da-40fe-979d-11bdf765e403 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (845)
May 27 17:03:07.468209 kernel: BTRFS info (device dm-0): first mount of filesystem 3c8c76ef-f1da-40fe-979d-11bdf765e403
May 27 17:03:07.468250 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 27 17:03:07.471189 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:03:08.001639 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:03:08.006108 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:03:08.013059 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:03:08.013849 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:03:08.033661 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:03:08.056877 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (880)
May 27 17:03:08.066914 kernel: BTRFS info (device sda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:03:08.066974 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:03:08.070077 kernel: BTRFS info (device sda6): using free-space-tree
May 27 17:03:08.123849 kernel: BTRFS info (device sda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:03:08.124714 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:03:08.129889 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:03:08.139732 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:03:08.161149 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:03:08.196034 systemd-networkd[1014]: lo: Link UP
May 27 17:03:08.196045 systemd-networkd[1014]: lo: Gained carrier
May 27 17:03:08.197359 systemd-networkd[1014]: Enumeration completed
May 27 17:03:08.198044 systemd-networkd[1014]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:03:08.198047 systemd-networkd[1014]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:03:08.198679 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:03:08.205700 systemd[1]: Reached target network.target - Network.
May 27 17:03:08.253857 kernel: mlx5_core 0781:00:02.0 enP1921s1: Link up
May 27 17:03:08.285848 kernel: hv_netvsc 00224879-2645-0022-4879-264500224879 eth0: Data path switched to VF: enP1921s1
May 27 17:03:08.286627 systemd-networkd[1014]: enP1921s1: Link UP
May 27 17:03:08.286779 systemd-networkd[1014]: eth0: Link UP
May 27 17:03:08.286917 systemd-networkd[1014]: eth0: Gained carrier
May 27 17:03:08.286928 systemd-networkd[1014]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:03:08.304306 systemd-networkd[1014]: enP1921s1: Gained carrier
May 27 17:03:08.314856 systemd-networkd[1014]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 27 17:03:09.447945 systemd-networkd[1014]: enP1921s1: Gained IPv6LL
May 27 17:03:09.573213 ignition[1013]: Ignition 2.21.0
May 27 17:03:09.573226 ignition[1013]: Stage: fetch-offline
May 27 17:03:09.575577 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:03:09.573309 ignition[1013]: no configs at "/usr/lib/ignition/base.d"
May 27 17:03:09.581327 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 17:03:09.573315 ignition[1013]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:03:09.573425 ignition[1013]: parsed url from cmdline: ""
May 27 17:03:09.573427 ignition[1013]: no config URL provided
May 27 17:03:09.573431 ignition[1013]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:03:09.573435 ignition[1013]: no config at "/usr/lib/ignition/user.ign"
May 27 17:03:09.573439 ignition[1013]: failed to fetch config: resource requires networking
May 27 17:03:09.573585 ignition[1013]: Ignition finished successfully
May 27 17:03:09.614205 ignition[1023]: Ignition 2.21.0
May 27 17:03:09.614211 ignition[1023]: Stage: fetch
May 27 17:03:09.614380 ignition[1023]: no configs at "/usr/lib/ignition/base.d"
May 27 17:03:09.614388 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:03:09.614465 ignition[1023]: parsed url from cmdline: ""
May 27 17:03:09.614468 ignition[1023]: no config URL provided
May 27 17:03:09.614471 ignition[1023]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:03:09.614476 ignition[1023]: no config at "/usr/lib/ignition/user.ign"
May 27 17:03:09.614501 ignition[1023]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 27 17:03:09.683611 ignition[1023]: GET result: OK
May 27 17:03:09.684224 ignition[1023]: config has been read from IMDS userdata
May 27 17:03:09.684249 ignition[1023]: parsing config with SHA512: 3858eebfc929f41b28da7734cc441b715d622b96b18c6f2906fdea3af53454a8add5d88fccad073ab19464f0b149f611b50baaa7a02ca6c08ae20453e0a12fa8
May 27 17:03:09.690081 unknown[1023]: fetched base config from "system"
May 27 17:03:09.690100 unknown[1023]: fetched base config from "system"
May 27 17:03:09.690340 ignition[1023]: fetch: fetch complete
May 27 17:03:09.690104 unknown[1023]: fetched user config from "azure"
May 27 17:03:09.690344 ignition[1023]: fetch: fetch passed
May 27 17:03:09.694533 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 17:03:09.690397 ignition[1023]: Ignition finished successfully
May 27 17:03:09.702730 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:03:09.734889 ignition[1029]: Ignition 2.21.0
May 27 17:03:09.734902 ignition[1029]: Stage: kargs
May 27 17:03:09.735391 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
May 27 17:03:09.735404 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:03:09.745565 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:03:09.736331 ignition[1029]: kargs: kargs passed
May 27 17:03:09.753800 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:03:09.736386 ignition[1029]: Ignition finished successfully
May 27 17:03:09.777083 ignition[1036]: Ignition 2.21.0
May 27 17:03:09.777094 ignition[1036]: Stage: disks
May 27 17:03:09.781017 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:03:09.777261 ignition[1036]: no configs at "/usr/lib/ignition/base.d"
May 27 17:03:09.788278 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:03:09.777268 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:03:09.795845 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:03:09.778196 ignition[1036]: disks: disks passed
May 27 17:03:09.803307 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:03:09.778251 ignition[1036]: Ignition finished successfully
May 27 17:03:09.811373 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:03:09.819045 systemd[1]: Reached target basic.target - Basic System.
May 27 17:03:09.827883 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:03:09.918912 systemd-fsck[1045]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
May 27 17:03:09.926670 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:03:09.932392 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:03:10.087972 systemd-networkd[1014]: eth0: Gained IPv6LL
May 27 17:03:10.172841 kernel: EXT4-fs (sda9): mounted filesystem a5483afc-8426-4c3e-85ef-8146f9077e7d r/w with ordered data mode. Quota mode: none.
May 27 17:03:10.174203 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:03:10.177588 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:03:10.203447 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:03:10.212947 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:03:10.221880 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 27 17:03:10.231958 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:03:10.231998 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:03:10.259694 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1059)
May 27 17:03:10.259714 kernel: BTRFS info (device sda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:03:10.242268 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:03:10.278677 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:03:10.278698 kernel: BTRFS info (device sda6): using free-space-tree
May 27 17:03:10.274758 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:03:10.286281 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:03:11.039635 coreos-metadata[1061]: May 27 17:03:11.039 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 17:03:11.046511 coreos-metadata[1061]: May 27 17:03:11.046 INFO Fetch successful
May 27 17:03:11.046511 coreos-metadata[1061]: May 27 17:03:11.046 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 27 17:03:11.059392 coreos-metadata[1061]: May 27 17:03:11.059 INFO Fetch successful
May 27 17:03:11.088992 coreos-metadata[1061]: May 27 17:03:11.088 INFO wrote hostname ci-4344.0.0-a-f939a1e004 to /sysroot/etc/hostname
May 27 17:03:11.095815 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:03:11.234697 initrd-setup-root[1090]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:03:11.293135 initrd-setup-root[1097]: cut: /sysroot/etc/group: No such file or directory
May 27 17:03:11.299889 initrd-setup-root[1104]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:03:11.304648 initrd-setup-root[1111]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:03:12.437932 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:03:12.444378 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:03:12.456523 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:03:12.465537 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:03:12.474417 kernel: BTRFS info (device sda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:03:12.491867 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:03:12.496393 ignition[1179]: INFO : Ignition 2.21.0
May 27 17:03:12.496393 ignition[1179]: INFO : Stage: mount
May 27 17:03:12.496393 ignition[1179]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:03:12.496393 ignition[1179]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:03:12.496393 ignition[1179]: INFO : mount: mount passed
May 27 17:03:12.496393 ignition[1179]: INFO : Ignition finished successfully
May 27 17:03:12.500265 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:03:12.508516 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:03:12.535089 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:03:12.556998 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1190)
May 27 17:03:12.566010 kernel: BTRFS info (device sda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:03:12.566055 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:03:12.568912 kernel: BTRFS info (device sda6): using free-space-tree
May 27 17:03:12.584574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:03:12.610104 ignition[1208]: INFO : Ignition 2.21.0
May 27 17:03:12.610104 ignition[1208]: INFO : Stage: files
May 27 17:03:12.617675 ignition[1208]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:03:12.617675 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:03:12.617675 ignition[1208]: DEBUG : files: compiled without relabeling support, skipping
May 27 17:03:12.631025 ignition[1208]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 17:03:12.636715 ignition[1208]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 17:03:12.669496 ignition[1208]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 17:03:12.674783 ignition[1208]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 17:03:12.674783 ignition[1208]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 17:03:12.670549 unknown[1208]: wrote ssh authorized keys file for user: core
May 27 17:03:12.746083 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 27 17:03:12.753827 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 27 17:03:12.911994 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 17:03:13.105387 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 27 17:03:13.105387 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 17:03:13.119707 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 17:03:13.119707 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:03:13.119707 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:03:13.119707 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:03:13.119707 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:03:13.119707 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:03:13.119707 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:03:13.166940 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:03:13.166940 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:03:13.166940 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 17:03:13.166940 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 17:03:13.166940 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 17:03:13.166940 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
May 27 17:03:13.678753 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 17:03:13.900893 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 17:03:13.900893 ignition[1208]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 17:03:13.936976 ignition[1208]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:03:13.973747 ignition[1208]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:03:13.973747 ignition[1208]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 17:03:13.987033 ignition[1208]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 27 17:03:13.987033 ignition[1208]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 27 17:03:13.987033 ignition[1208]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:03:13.987033 ignition[1208]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:03:13.987033 ignition[1208]: INFO : files: files passed
May 27 17:03:13.987033 ignition[1208]: INFO : Ignition finished successfully
May 27 17:03:13.982387 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 17:03:13.992027 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 17:03:14.021710 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 17:03:14.030881 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 17:03:14.054173 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:03:14.054173 initrd-setup-root-after-ignition[1236]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:03:14.030961 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 17:03:14.084628 initrd-setup-root-after-ignition[1240]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:03:14.054646 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:03:14.065228 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:03:14.075799 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:03:14.137720 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:03:14.139888 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:03:14.146697 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:03:14.154979 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:03:14.162159 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:03:14.162917 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:03:14.200732 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:03:14.208349 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:03:14.234375 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:03:14.238951 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:03:14.247350 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:03:14.254943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:03:14.255060 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:03:14.265790 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:03:14.269979 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:03:14.277523 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:03:14.285404 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:03:14.292617 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:03:14.300686 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:03:14.308902 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:03:14.316824 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:03:14.325075 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:03:14.333378 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:03:14.341729 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:03:14.349312 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:03:14.349433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:03:14.361241 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:03:14.365704 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:03:14.374113 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:03:14.374186 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:03:14.383909 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:03:14.384012 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:03:14.397279 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:03:14.397372 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:03:14.402090 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:03:14.402168 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:03:14.476412 ignition[1261]: INFO : Ignition 2.21.0
May 27 17:03:14.476412 ignition[1261]: INFO : Stage: umount
May 27 17:03:14.476412 ignition[1261]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:03:14.476412 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:03:14.476412 ignition[1261]: INFO : umount: umount passed
May 27 17:03:14.476412 ignition[1261]: INFO : Ignition finished successfully
May 27 17:03:14.409683 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 27 17:03:14.409753 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:03:14.420206 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:03:14.444035 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:03:14.453801 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:03:14.455377 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:03:14.471914 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:03:14.472017 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:03:14.481137 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:03:14.482100 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:03:14.482200 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:03:14.493052 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:03:14.494220 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:03:14.500490 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:03:14.500549 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:03:14.507483 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:03:14.507524 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:03:14.514315 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 17:03:14.514351 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 17:03:14.522539 systemd[1]: Stopped target network.target - Network.
May 27 17:03:14.530558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:03:14.530638 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:03:14.538481 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:03:14.544811 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:03:14.548207 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:03:14.553426 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:03:14.560693 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:03:14.567791 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:03:14.567839 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:03:14.575802 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:03:14.575846 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:03:14.582909 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:03:14.582966 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:03:14.589889 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:03:14.589916 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:03:14.599692 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:03:14.606441 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:03:14.623866 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:03:14.623996 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:03:14.633721 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:03:14.794988 kernel: hv_netvsc 00224879-2645-0022-4879-264500224879 eth0: Data path switched from VF: enP1921s1
May 27 17:03:14.633960 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:03:14.634070 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:03:14.649449 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:03:14.649965 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:03:14.657056 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:03:14.657110 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:03:14.667012 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:03:14.678978 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:03:14.679056 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:03:14.687704 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:03:14.687766 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:03:14.697784 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:03:14.697848 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:03:14.702271 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:03:14.702317 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:03:14.713638 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:03:14.721698 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:03:14.721830 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:03:14.743653 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:03:14.748009 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:03:14.756476 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:03:14.756521 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:03:14.765060 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:03:14.765087 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:03:14.773018 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:03:14.773083 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:03:14.790323 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:03:14.790394 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:03:14.802164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:03:14.802227 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:03:14.819982 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:03:14.834585 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:03:14.834669 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:03:14.855146 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:03:14.855210 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:03:14.864202 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 17:03:14.864271 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:03:14.874063 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:03:15.028479 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
May 27 17:03:14.874121 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:03:14.879195 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:03:14.879236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:03:14.892383 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 17:03:14.892436 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 17:03:14.892460 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 17:03:14.892481 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:03:14.892852 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:03:14.892946 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:03:14.901430 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:03:14.901501 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:03:14.909541 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:03:14.909606 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:03:14.924591 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:03:14.933085 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:03:14.933192 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:03:14.941106 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:03:14.960069 systemd[1]: Switching root.
May 27 17:03:15.107882 systemd-journald[224]: Journal stopped
May 27 17:03:20.536493 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:03:20.536514 kernel: SELinux: policy capability open_perms=1
May 27 17:03:20.536522 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:03:20.536527 kernel: SELinux: policy capability always_check_network=0
May 27 17:03:20.536534 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:03:20.536539 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:03:20.536545 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:03:20.536550 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:03:20.536555 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:03:20.536560 kernel: audit: type=1403 audit(1748365395.964:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:03:20.536567 systemd[1]: Successfully loaded SELinux policy in 139.875ms.
May 27 17:03:20.536575 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.113ms.
May 27 17:03:20.536582 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:03:20.536588 systemd[1]: Detected virtualization microsoft.
May 27 17:03:20.536594 systemd[1]: Detected architecture arm64.
May 27 17:03:20.536601 systemd[1]: Detected first boot.
May 27 17:03:20.536607 systemd[1]: Hostname set to .
May 27 17:03:20.536613 systemd[1]: Initializing machine ID from random generator.
May 27 17:03:20.536619 zram_generator::config[1304]: No configuration found.
May 27 17:03:20.536627 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:03:20.536633 systemd[1]: Populated /etc with preset unit settings.
May 27 17:03:20.536639 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:03:20.536646 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:03:20.536652 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:03:20.536658 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:03:20.536664 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:03:20.536670 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:03:20.536676 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:03:20.536682 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:03:20.536689 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:03:20.536695 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:03:20.536701 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:03:20.536707 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:03:20.536713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:03:20.536719 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:03:20.536725 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:03:20.536731 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:03:20.536737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:03:20.536744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:03:20.536750 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 27 17:03:20.536758 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:03:20.536764 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:03:20.536770 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:03:20.536776 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:03:20.536782 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:03:20.536789 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:03:20.536795 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:03:20.536801 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:03:20.536807 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:03:20.536813 systemd[1]: Reached target swap.target - Swaps.
May 27 17:03:20.542531 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:03:20.542560 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:03:20.542576 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:03:20.542583 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:03:20.542590 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:03:20.542597 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:03:20.542603 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:03:20.542610 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:03:20.542618 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:03:20.542625 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:03:20.542631 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:03:20.542638 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:03:20.542645 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:03:20.542652 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:03:20.542658 systemd[1]: Reached target machines.target - Containers.
May 27 17:03:20.542665 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:03:20.542673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:03:20.542679 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:03:20.542685 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:03:20.542692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:03:20.542698 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:03:20.542705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:03:20.542711 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:03:20.542718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:03:20.542724 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:03:20.542732 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:03:20.542738 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:03:20.542744 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:03:20.542751 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:03:20.542757 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:03:20.542765 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:03:20.542771 kernel: fuse: init (API version 7.41)
May 27 17:03:20.542777 kernel: loop: module loaded
May 27 17:03:20.542784 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:03:20.542791 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:03:20.542797 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:03:20.542804 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:03:20.542878 systemd-journald[1384]: Collecting audit messages is disabled.
May 27 17:03:20.542896 kernel: ACPI: bus type drm_connector registered
May 27 17:03:20.542903 systemd-journald[1384]: Journal started
May 27 17:03:20.542919 systemd-journald[1384]: Runtime Journal (/run/log/journal/d54d1c45054649d78d948c694c341962) is 8M, max 78.5M, 70.5M free.
May 27 17:03:19.696518 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:03:19.702352 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 27 17:03:19.702797 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:03:19.703126 systemd[1]: systemd-journald.service: Consumed 2.312s CPU time.
May 27 17:03:20.555647 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:03:20.561982 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:03:20.562048 systemd[1]: Stopped verity-setup.service.
May 27 17:03:20.574599 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:03:20.575306 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:03:20.579377 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:03:20.583749 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:03:20.587372 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:03:20.591753 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:03:20.596335 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:03:20.600344 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:03:20.605085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:03:20.610163 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:03:20.610321 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:03:20.615317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:03:20.615447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:03:20.622307 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:03:20.622477 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:03:20.628011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:03:20.628160 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:03:20.633525 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:03:20.633672 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:03:20.637948 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:03:20.638082 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:03:20.642345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:03:20.647416 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:03:20.652337 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:03:20.665576 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:03:20.671134 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:03:20.681610 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:03:20.688849 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:03:20.688886 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:03:20.693638 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:03:20.699429 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:03:20.703435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:03:20.721938 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:03:20.737833 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:03:20.744106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:03:20.745073 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:03:20.751978 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:03:20.753490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:03:20.764123 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:03:20.773014 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:03:20.782633 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:03:20.790583 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:03:20.795547 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:03:20.800517 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:03:20.805858 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:03:20.813537 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:03:20.820144 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:03:20.833853 systemd-journald[1384]: Time spent on flushing to /var/log/journal/d54d1c45054649d78d948c694c341962 is 36.934ms for 944 entries.
May 27 17:03:20.833853 systemd-journald[1384]: System Journal (/var/log/journal/d54d1c45054649d78d948c694c341962) is 11.8M, max 2.6G, 2.6G free.
May 27 17:03:20.961423 systemd-journald[1384]: Received client request to flush runtime journal.
May 27 17:03:20.961458 kernel: loop0: detected capacity change from 0 to 107312
May 27 17:03:20.961473 systemd-journald[1384]: /var/log/journal/d54d1c45054649d78d948c694c341962/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
May 27 17:03:20.961489 systemd-journald[1384]: Rotating system journal.
May 27 17:03:20.957658 systemd-tmpfiles[1443]: ACLs are not supported, ignoring.
May 27 17:03:20.957667 systemd-tmpfiles[1443]: ACLs are not supported, ignoring.
May 27 17:03:20.962670 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:03:20.968871 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:03:20.977571 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:03:20.990030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:03:21.039475 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:03:21.040118 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:03:21.363858 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:03:21.369526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:03:21.394388 systemd-tmpfiles[1462]: ACLs are not supported, ignoring.
May 27 17:03:21.394403 systemd-tmpfiles[1462]: ACLs are not supported, ignoring.
May 27 17:03:21.397785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:03:21.455854 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:03:21.531843 kernel: loop1: detected capacity change from 0 to 28936
May 27 17:03:21.892856 kernel: loop2: detected capacity change from 0 to 207008
May 27 17:03:21.937851 kernel: loop3: detected capacity change from 0 to 138376
May 27 17:03:21.994677 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:03:22.000773 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:03:22.028542 systemd-udevd[1470]: Using default interface naming scheme 'v255'.
May 27 17:03:22.346860 kernel: loop4: detected capacity change from 0 to 107312
May 27 17:03:22.353847 kernel: loop5: detected capacity change from 0 to 28936
May 27 17:03:22.359856 kernel: loop6: detected capacity change from 0 to 207008
May 27 17:03:22.366173 kernel: loop7: detected capacity change from 0 to 138376
May 27 17:03:22.369817 (sd-merge)[1472]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 27 17:03:22.370219 (sd-merge)[1472]: Merged extensions into '/usr'.
May 27 17:03:22.373980 systemd[1]: Reload requested from client PID 1441 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:03:22.374114 systemd[1]: Reloading...
May 27 17:03:22.432902 zram_generator::config[1497]: No configuration found.
May 27 17:03:22.505197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:03:22.636047 systemd[1]: Reloading finished in 261 ms.
May 27 17:03:22.656287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:03:22.662727 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:03:22.673396 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 27 17:03:22.678087 systemd[1]: Starting ensure-sysext.service...
May 27 17:03:22.685022 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:03:22.691735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:03:22.702911 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:03:22.732156 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:03:22.732302 systemd[1]: Reload requested from client PID 1594 ('systemctl') (unit ensure-sysext.service)...
May 27 17:03:22.732317 systemd[1]: Reloading...
May 27 17:03:22.732563 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:03:22.732881 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:03:22.733125 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:03:22.733640 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:03:22.733924 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
May 27 17:03:22.734136 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
May 27 17:03:22.750860 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:03:22.773689 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:03:22.773702 systemd-tmpfiles[1597]: Skipping /boot
May 27 17:03:22.783599 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:03:22.783613 systemd-tmpfiles[1597]: Skipping /boot
May 27 17:03:22.882793 kernel: hv_vmbus: registering driver hv_balloon
May 27 17:03:22.882908 kernel: hv_vmbus: registering driver hyperv_fb
May 27 17:03:22.882930 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 27 17:03:22.886936 kernel: hv_balloon: Memory hot add disabled on ARM64
May 27 17:03:22.919843 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 27 17:03:22.929516 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 27 17:03:22.929607 zram_generator::config[1682]: No configuration found.
May 27 17:03:22.929635 kernel: Console: switching to colour dummy device 80x25
May 27 17:03:22.936069 kernel: Console: switching to colour frame buffer device 128x48
May 27 17:03:23.020728 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:03:23.084011 kernel: MACsec IEEE 802.1AE
May 27 17:03:23.108665 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 27 17:03:23.113587 systemd[1]: Reloading finished in 381 ms.
May 27 17:03:23.120993 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:03:23.153368 systemd[1]: Finished ensure-sysext.service.
May 27 17:03:23.170696 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:03:23.299542 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:03:23.304373 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:03:23.305465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:03:23.315119 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:03:23.320108 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:03:23.327074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:03:23.333109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:03:23.334833 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:03:23.340072 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:03:23.341098 augenrules[1788]: No rules
May 27 17:03:23.345090 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:03:23.354444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:03:23.358739 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:03:23.367062 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:03:23.380584 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:03:23.386036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:03:23.393661 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:03:23.393859 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:03:23.398691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:03:23.402994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:03:23.410777 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:03:23.411126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:03:23.418041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:03:23.418398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:03:23.423996 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:03:23.424143 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:03:23.431082 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:03:23.445205 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:03:23.458862 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:03:23.467970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:03:23.468052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:03:23.484325 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:03:23.602896 systemd-resolved[1794]: Positive Trust Anchors:
May 27 17:03:23.603228 systemd-resolved[1794]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:03:23.603303 systemd-resolved[1794]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:03:23.605982 systemd-resolved[1794]: Using system hostname 'ci-4344.0.0-a-f939a1e004'.
May 27 17:03:23.607446 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:03:23.611931 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:03:23.698925 systemd-networkd[1595]: lo: Link UP
May 27 17:03:23.698935 systemd-networkd[1595]: lo: Gained carrier
May 27 17:03:23.700548 systemd-networkd[1595]: Enumeration completed
May 27 17:03:23.700649 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:03:23.700815 systemd-networkd[1595]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:03:23.700825 systemd-networkd[1595]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:03:23.705376 systemd[1]: Reached target network.target - Network.
May 27 17:03:23.711382 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:03:23.718052 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:03:23.749919 kernel: mlx5_core 0781:00:02.0 enP1921s1: Link up
May 27 17:03:23.770857 kernel: hv_netvsc 00224879-2645-0022-4879-264500224879 eth0: Data path switched to VF: enP1921s1
May 27 17:03:23.772304 systemd-networkd[1595]: enP1921s1: Link UP
May 27 17:03:23.772457 systemd-networkd[1595]: eth0: Link UP
May 27 17:03:23.772463 systemd-networkd[1595]: eth0: Gained carrier
May 27 17:03:23.772483 systemd-networkd[1595]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:03:23.782223 systemd-networkd[1595]: enP1921s1: Gained carrier
May 27 17:03:23.787637 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:03:23.794070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:03:23.798944 systemd-networkd[1595]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 27 17:03:23.848546 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:03:23.853928 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:03:25.639959 systemd-networkd[1595]: eth0: Gained IPv6LL
May 27 17:03:25.640368 systemd-networkd[1595]: enP1921s1: Gained IPv6LL
May 27 17:03:25.643873 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 17:03:25.649196 systemd[1]: Reached target network-online.target - Network is Online.
May 27 17:03:28.098712 ldconfig[1436]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:03:28.112874 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:03:28.119442 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:03:28.149789 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:03:28.154560 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:03:28.158633 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:03:28.163763 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:03:28.168534 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:03:28.172573 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:03:28.177491 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:03:28.182127 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:03:28.182152 systemd[1]: Reached target paths.target - Path Units.
May 27 17:03:28.185662 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:03:28.190466 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:03:28.196380 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:03:28.201808 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:03:28.206793 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:03:28.211937 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:03:28.226610 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:03:28.231269 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:03:28.236444 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:03:28.240725 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:03:28.244238 systemd[1]: Reached target basic.target - Basic System.
May 27 17:03:28.247675 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:03:28.247707 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:03:28.249938 systemd[1]: Starting chronyd.service - NTP client/server...
May 27 17:03:28.262937 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:03:28.269997 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 17:03:28.276555 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:03:28.282978 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:03:28.290981 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:03:28.298773 (chronyd)[1830]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
May 27 17:03:28.299282 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:03:28.303290 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:03:28.312006 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
May 27 17:03:28.316544 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
May 27 17:03:28.317686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:03:28.326539 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:03:28.327137 KVP[1840]: KVP starting; pid is:1840
May 27 17:03:28.331936 jq[1838]: false
May 27 17:03:28.333779 KVP[1840]: KVP LIC Version: 3.1
May 27 17:03:28.333885 kernel: hv_utils: KVP IC version 4.0
May 27 17:03:28.336699 chronyd[1846]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
May 27 17:03:28.337475 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 17:03:28.343991 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:03:28.351012 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:03:28.356966 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:03:28.365456 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:03:28.370937 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:03:28.374071 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:03:28.376080 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:03:28.384943 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:03:28.389734 extend-filesystems[1839]: Found loop4
May 27 17:03:28.396233 extend-filesystems[1839]: Found loop5
May 27 17:03:28.396233 extend-filesystems[1839]: Found loop6
May 27 17:03:28.396233 extend-filesystems[1839]: Found loop7
May 27 17:03:28.396233 extend-filesystems[1839]: Found sda
May 27 17:03:28.396233 extend-filesystems[1839]: Found sda1
May 27 17:03:28.396233 extend-filesystems[1839]: Found sda2
May 27 17:03:28.396233 extend-filesystems[1839]: Found sda3
May 27 17:03:28.396233 extend-filesystems[1839]: Found usr
May 27 17:03:28.396233 extend-filesystems[1839]: Found sda4
May 27 17:03:28.396233 extend-filesystems[1839]: Found sda6
May 27 17:03:28.396233 extend-filesystems[1839]: Found sda7
May 27 17:03:28.396233 extend-filesystems[1839]: Found sda9
May 27 17:03:28.396233 extend-filesystems[1839]: Checking size of /dev/sda9
May 27 17:03:28.527619 extend-filesystems[1839]: Old size kept for /dev/sda9
May 27 17:03:28.527619 extend-filesystems[1839]: Found sr0
May 27 17:03:28.401106 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:03:28.553128 update_engine[1855]: I20250527 17:03:28.503278 1855 main.cc:92] Flatcar Update Engine starting
May 27 17:03:28.408286 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:03:28.553638 jq[1859]: true
May 27 17:03:28.409787 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:03:28.410994 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:03:28.557381 tar[1867]: linux-arm64/LICENSE
May 27 17:03:28.557381 tar[1867]: linux-arm64/helm
May 27 17:03:28.411209 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:03:28.417095 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:03:28.559557 jq[1871]: true
May 27 17:03:28.417256 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:03:28.454081 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 17:03:28.454598 (ntainerd)[1873]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:03:28.466546 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:03:28.466738 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:03:28.531962 systemd-logind[1853]: New seat seat0.
May 27 17:03:28.534290 systemd-logind[1853]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 17:03:28.534524 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:03:28.587026 chronyd[1846]: Timezone right/UTC failed leap second check, ignoring
May 27 17:03:28.587226 chronyd[1846]: Loaded seccomp filter (level 2)
May 27 17:03:28.590231 systemd[1]: Started chronyd.service - NTP client/server.
May 27 17:03:28.617762 bash[1917]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:03:28.612336 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:03:28.621125 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 17:03:28.648442 dbus-daemon[1833]: [system] SELinux support is enabled
May 27 17:03:28.648795 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:03:28.654251 update_engine[1855]: I20250527 17:03:28.653977 1855 update_check_scheduler.cc:74] Next update check in 2m37s
May 27 17:03:28.660897 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:03:28.661316 dbus-daemon[1833]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 27 17:03:28.660925 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:03:28.668178 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:03:28.668199 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:03:28.684526 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:03:28.697634 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:03:28.738498 coreos-metadata[1832]: May 27 17:03:28.737 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 17:03:28.742610 coreos-metadata[1832]: May 27 17:03:28.742 INFO Fetch successful
May 27 17:03:28.742610 coreos-metadata[1832]: May 27 17:03:28.742 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
May 27 17:03:28.748179 coreos-metadata[1832]: May 27 17:03:28.747 INFO Fetch successful
May 27 17:03:28.748179 coreos-metadata[1832]: May 27 17:03:28.748 INFO Fetching http://168.63.129.16/machine/fc3c3442-f3eb-449d-9904-664409131567/3f9f4839%2Db50d%2D4a2b%2D8251%2Dbffc21997ba4.%5Fci%2D4344.0.0%2Da%2Df939a1e004?comp=config&type=sharedConfig&incarnation=1: Attempt #1
May 27 17:03:28.750108 coreos-metadata[1832]: May 27 17:03:28.750 INFO Fetch successful
May 27 17:03:28.750814 coreos-metadata[1832]: May 27 17:03:28.750 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
May 27 17:03:28.760357 coreos-metadata[1832]: May 27 17:03:28.760 INFO Fetch successful
May 27 17:03:28.797569 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
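The coreos-metadata fetches above hit the Azure wireserver (168.63.129.16) and the instance metadata service (169.254.169.254). IMDS requires a `Metadata: true` request header. A minimal sketch of building such a request with the standard library (the `imds_request` helper is ours for illustration; the request is constructed but never sent):

```python
from urllib.request import Request

def imds_request(path: str, api_version: str = "2017-08-01") -> Request:
    """Build (but do not send) an Azure IMDS request like the one in the log.

    IMDS rejects requests that lack the 'Metadata: true' header, which is why
    coreos-metadata and similar agents always set it.
    """
    url = (
        "http://169.254.169.254/metadata/instance/"
        f"{path}?api-version={api_version}&format=text"
    )
    return Request(url, headers={"Metadata": "true"})

req = imds_request("compute/vmSize")
assert req.get_header("Metadata") == "true"
assert "api-version=2017-08-01" in req.full_url
```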
May 27 17:03:28.806448 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 17:03:28.904849 sshd_keygen[1876]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 17:03:28.930237 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 17:03:28.939055 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 17:03:28.946195 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
May 27 17:03:28.972684 systemd[1]: issuegen.service: Deactivated successfully.
May 27 17:03:28.972869 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 17:03:28.982272 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 17:03:29.011958 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
May 27 17:03:29.023225 locksmithd[1979]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:03:29.023978 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 17:03:29.033955 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 17:03:29.039336 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 27 17:03:29.046957 systemd[1]: Reached target getty.target - Login Prompts.
May 27 17:03:29.141541 tar[1867]: linux-arm64/README.md
May 27 17:03:29.155928 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 17:03:29.192838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:03:29.203220 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:03:29.276852 containerd[1873]: time="2025-05-27T17:03:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:03:29.278515 containerd[1873]: time="2025-05-27T17:03:29.278094812Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:03:29.284530 containerd[1873]: time="2025-05-27T17:03:29.284483252Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.6µs"
May 27 17:03:29.284530 containerd[1873]: time="2025-05-27T17:03:29.284522052Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:03:29.284530 containerd[1873]: time="2025-05-27T17:03:29.284537236Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:03:29.284729 containerd[1873]: time="2025-05-27T17:03:29.284707300Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:03:29.284729 containerd[1873]: time="2025-05-27T17:03:29.284722604Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:03:29.284769 containerd[1873]: time="2025-05-27T17:03:29.284741612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:03:29.284799 containerd[1873]: time="2025-05-27T17:03:29.284789732Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:03:29.284799 containerd[1873]: time="2025-05-27T17:03:29.284797556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:03:29.285067 containerd[1873]: time="2025-05-27T17:03:29.285024812Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:03:29.285067 containerd[1873]: time="2025-05-27T17:03:29.285035524Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:03:29.285067 containerd[1873]: time="2025-05-27T17:03:29.285043204Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:03:29.285067 containerd[1873]: time="2025-05-27T17:03:29.285048612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:03:29.285149 containerd[1873]: time="2025-05-27T17:03:29.285113236Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:03:29.285281 containerd[1873]: time="2025-05-27T17:03:29.285260972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:03:29.285298 containerd[1873]: time="2025-05-27T17:03:29.285292260Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:03:29.285316 containerd[1873]: time="2025-05-27T17:03:29.285299180Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:03:29.285350 containerd[1873]: time="2025-05-27T17:03:29.285325628Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:03:29.285744 containerd[1873]: time="2025-05-27T17:03:29.285464772Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 17:03:29.285744 containerd[1873]: time="2025-05-27T17:03:29.285521100Z" level=info msg="metadata content store policy set" policy=shared
May 27 17:03:29.298582 containerd[1873]: time="2025-05-27T17:03:29.298535836Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 17:03:29.298582 containerd[1873]: time="2025-05-27T17:03:29.298597348Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 17:03:29.298693 containerd[1873]: time="2025-05-27T17:03:29.298608452Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 17:03:29.298693 containerd[1873]: time="2025-05-27T17:03:29.298617204Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 17:03:29.298693 containerd[1873]: time="2025-05-27T17:03:29.298641676Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 17:03:29.298693 containerd[1873]: time="2025-05-27T17:03:29.298648828Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 17:03:29.298693 containerd[1873]: time="2025-05-27T17:03:29.298657228Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 17:03:29.298693 containerd[1873]: time="2025-05-27T17:03:29.298664588Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 17:03:29.298693 containerd[1873]: time="2025-05-27T17:03:29.298678652Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 17:03:29.298693 containerd[1873]: time="2025-05-27T17:03:29.298685948Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 17:03:29.298693 containerd[1873]: time="2025-05-27T17:03:29.298692212Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298701924Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298859476Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298876588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298890076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298898516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298905276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298911668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298919284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298925292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298932860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298939564Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.298947028Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.299006316Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.299017004Z" level=info msg="Start snapshots syncer"
May 27 17:03:29.299035 containerd[1873]: time="2025-05-27T17:03:29.299036268Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 17:03:29.299880 containerd[1873]: time="2025-05-27T17:03:29.299206764Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 17:03:29.299880 containerd[1873]: time="2025-05-27T17:03:29.299240244Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299309404Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299415076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299429828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299436692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299444724Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299452524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299459532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299467652Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299488660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299495708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299501892Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299526828Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299536372Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:03:29.299965 containerd[1873]: time="2025-05-27T17:03:29.299542012Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:03:29.300143 containerd[1873]: time="2025-05-27T17:03:29.299547444Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:03:29.300143 containerd[1873]: time="2025-05-27T17:03:29.299551932Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 17:03:29.300143 containerd[1873]: time="2025-05-27T17:03:29.299558332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 17:03:29.300143 containerd[1873]: time="2025-05-27T17:03:29.299565340Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 17:03:29.300143 containerd[1873]: time="2025-05-27T17:03:29.299577940Z" level=info msg="runtime interface created"
May 27 17:03:29.300143 containerd[1873]: time="2025-05-27T17:03:29.299581324Z" level=info msg="created NRI interface"
May 27 17:03:29.300143 containerd[1873]: time="2025-05-27T17:03:29.299586580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 17:03:29.300143 containerd[1873]: time="2025-05-27T17:03:29.299595252Z" level=info msg="Connect containerd service"
May 27 17:03:29.300143 containerd[1873]: time="2025-05-27T17:03:29.299614996Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 17:03:29.300471 containerd[1873]: time="2025-05-27T17:03:29.300444132Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 17:03:29.445683 kubelet[2027]: E0527 17:03:29.445545 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:03:29.447612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:03:29.447726 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:03:29.449906 systemd[1]: kubelet.service: Consumed 555ms CPU time, 253.7M memory peak.
May 27 17:03:30.257611 containerd[1873]: time="2025-05-27T17:03:30.257448676Z" level=info msg="Start subscribing containerd event"
May 27 17:03:30.257611 containerd[1873]: time="2025-05-27T17:03:30.257529436Z" level=info msg="Start recovering state"
May 27 17:03:30.257611 containerd[1873]: time="2025-05-27T17:03:30.257624540Z" level=info msg="Start event monitor"
May 27 17:03:30.257779 containerd[1873]: time="2025-05-27T17:03:30.257636764Z" level=info msg="Start cni network conf syncer for default"
May 27 17:03:30.257779 containerd[1873]: time="2025-05-27T17:03:30.257642140Z" level=info msg="Start streaming server"
May 27 17:03:30.257779 containerd[1873]: time="2025-05-27T17:03:30.257648364Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 17:03:30.257779 containerd[1873]: time="2025-05-27T17:03:30.257660196Z" level=info msg="runtime interface starting up..."
May 27 17:03:30.257779 containerd[1873]: time="2025-05-27T17:03:30.257664828Z" level=info msg="starting plugins..."
May 27 17:03:30.257779 containerd[1873]: time="2025-05-27T17:03:30.257676540Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 17:03:30.259836 containerd[1873]: time="2025-05-27T17:03:30.257966172Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 17:03:30.259836 containerd[1873]: time="2025-05-27T17:03:30.258018076Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 17:03:30.259836 containerd[1873]: time="2025-05-27T17:03:30.258065308Z" level=info msg="containerd successfully booted in 0.982315s"
May 27 17:03:30.258230 systemd[1]: Started containerd.service - containerd container runtime.
May 27 17:03:30.263635 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 17:03:30.269745 systemd[1]: Startup finished in 1.649s (kernel) + 12.439s (initrd) + 14.443s (userspace) = 28.532s.
May 27 17:03:30.528598 login[2013]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
May 27 17:03:30.528790 login[2012]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:30.543659 systemd-logind[1853]: New session 2 of user core.
May 27 17:03:30.544779 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 17:03:30.546386 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 17:03:30.566876 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 17:03:30.568896 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 17:03:30.607171 (systemd)[2053]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 17:03:30.609656 systemd-logind[1853]: New session c1 of user core.
May 27 17:03:30.868014 systemd[2053]: Queued start job for default target default.target.
May 27 17:03:30.875670 systemd[2053]: Created slice app.slice - User Application Slice.
May 27 17:03:30.875695 systemd[2053]: Reached target paths.target - Paths.
May 27 17:03:30.875726 systemd[2053]: Reached target timers.target - Timers.
May 27 17:03:30.876839 systemd[2053]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 17:03:30.884766 systemd[2053]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 17:03:30.885252 systemd[2053]: Reached target sockets.target - Sockets.
May 27 17:03:30.885300 systemd[2053]: Reached target basic.target - Basic System.
May 27 17:03:30.885320 systemd[2053]: Reached target default.target - Main User Target.
May 27 17:03:30.885341 systemd[2053]: Startup finished in 270ms.
May 27 17:03:30.885683 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 17:03:30.896985 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 17:03:31.107252 waagent[2010]: 2025-05-27T17:03:31.102835Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
May 27 17:03:31.107792 waagent[2010]: 2025-05-27T17:03:31.107738Z INFO Daemon Daemon OS: flatcar 4344.0.0
May 27 17:03:31.111061 waagent[2010]: 2025-05-27T17:03:31.111010Z INFO Daemon Daemon Python: 3.11.12
May 27 17:03:31.114154 waagent[2010]: 2025-05-27T17:03:31.114099Z INFO Daemon Daemon Run daemon
May 27 17:03:31.117081 waagent[2010]: 2025-05-27T17:03:31.117036Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.0.0'
May 27 17:03:31.123413 waagent[2010]: 2025-05-27T17:03:31.123304Z INFO Daemon Daemon Using waagent for provisioning
May 27 17:03:31.127338 waagent[2010]: 2025-05-27T17:03:31.127284Z INFO Daemon Daemon Activate resource disk
May 27 17:03:31.130608 waagent[2010]: 2025-05-27T17:03:31.130560Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
May 27 17:03:31.138482 waagent[2010]: 2025-05-27T17:03:31.138424Z INFO Daemon Daemon Found device: None
May 27 17:03:31.141613 waagent[2010]: 2025-05-27T17:03:31.141562Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
May 27 17:03:31.147535 waagent[2010]: 2025-05-27T17:03:31.147485Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
May 27 17:03:31.155553 waagent[2010]: 2025-05-27T17:03:31.155501Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 27 17:03:31.159693 waagent[2010]: 2025-05-27T17:03:31.159648Z INFO Daemon Daemon Running default provisioning handler
May 27 17:03:31.168853 waagent[2010]: 2025-05-27T17:03:31.168794Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
May 27 17:03:31.178373 waagent[2010]: 2025-05-27T17:03:31.178321Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 27 17:03:31.185333 waagent[2010]: 2025-05-27T17:03:31.185279Z INFO Daemon Daemon cloud-init is enabled: False
May 27 17:03:31.188798 waagent[2010]: 2025-05-27T17:03:31.188748Z INFO Daemon Daemon Copying ovf-env.xml
May 27 17:03:31.318807 waagent[2010]: 2025-05-27T17:03:31.318735Z INFO Daemon Daemon Successfully mounted dvd
May 27 17:03:31.352466 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
May 27 17:03:31.354722 waagent[2010]: 2025-05-27T17:03:31.354662Z INFO Daemon Daemon Detect protocol endpoint
May 27 17:03:31.358259 waagent[2010]: 2025-05-27T17:03:31.358195Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 27 17:03:31.362383 waagent[2010]: 2025-05-27T17:03:31.362325Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
May 27 17:03:31.366880 waagent[2010]: 2025-05-27T17:03:31.366777Z INFO Daemon Daemon Test for route to 168.63.129.16
May 27 17:03:31.370743 waagent[2010]: 2025-05-27T17:03:31.370698Z INFO Daemon Daemon Route to 168.63.129.16 exists
May 27 17:03:31.374211 waagent[2010]: 2025-05-27T17:03:31.374105Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
May 27 17:03:31.436345 waagent[2010]: 2025-05-27T17:03:31.436300Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
May 27 17:03:31.441106 waagent[2010]: 2025-05-27T17:03:31.441073Z INFO Daemon Daemon Wire protocol version:2012-11-30
May 27 17:03:31.444857 waagent[2010]: 2025-05-27T17:03:31.444798Z INFO Daemon Daemon Server preferred version:2015-04-05
May 27 17:03:31.530348 login[2013]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:31.536630 systemd-logind[1853]: New session 1 of user core.
May 27 17:03:31.540110 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 17:03:31.603719 waagent[2010]: 2025-05-27T17:03:31.603626Z INFO Daemon Daemon Initializing goal state during protocol detection
May 27 17:03:31.608876 waagent[2010]: 2025-05-27T17:03:31.608807Z INFO Daemon Daemon Forcing an update of the goal state.
May 27 17:03:31.628869 waagent[2010]: 2025-05-27T17:03:31.628764Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 27 17:03:31.665600 waagent[2010]: 2025-05-27T17:03:31.665560Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164
May 27 17:03:31.670096 waagent[2010]: 2025-05-27T17:03:31.670054Z INFO Daemon
May 27 17:03:31.672128 waagent[2010]: 2025-05-27T17:03:31.672096Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 726e5ef8-c5a9-4850-8710-b63b69037df2 eTag: 10973329294223712585 source: Fabric]
May 27 17:03:31.680131 waagent[2010]: 2025-05-27T17:03:31.680097Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
May 27 17:03:31.684809 waagent[2010]: 2025-05-27T17:03:31.684776Z INFO Daemon
May 27 17:03:31.686887 waagent[2010]: 2025-05-27T17:03:31.686858Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
May 27 17:03:31.695691 waagent[2010]: 2025-05-27T17:03:31.695659Z INFO Daemon Daemon Downloading artifacts profile blob
May 27 17:03:31.832192 waagent[2010]: 2025-05-27T17:03:31.832114Z INFO Daemon Downloaded certificate {'thumbprint': '65F08063B708AD01F0763A94B3A48A418EC74D82', 'hasPrivateKey': True}
May 27 17:03:31.839219 waagent[2010]: 2025-05-27T17:03:31.839175Z INFO Daemon Downloaded certificate {'thumbprint': '4DE29FC6AA941CD1EE03ECC49B9F340952909814', 'hasPrivateKey': False}
May 27 17:03:31.846260 waagent[2010]: 2025-05-27T17:03:31.846211Z INFO Daemon Fetch goal state completed
May 27 17:03:31.885419 waagent[2010]: 2025-05-27T17:03:31.885317Z INFO Daemon Daemon Starting provisioning
May 27 17:03:31.889335 waagent[2010]: 2025-05-27T17:03:31.889271Z INFO Daemon Daemon Handle ovf-env.xml.
May 27 17:03:31.892952 waagent[2010]: 2025-05-27T17:03:31.892916Z INFO Daemon Daemon Set hostname [ci-4344.0.0-a-f939a1e004]
May 27 17:03:31.933774 waagent[2010]: 2025-05-27T17:03:31.933711Z INFO Daemon Daemon Publish hostname [ci-4344.0.0-a-f939a1e004]
May 27 17:03:31.938506 waagent[2010]: 2025-05-27T17:03:31.938448Z INFO Daemon Daemon Examine /proc/net/route for primary interface
May 27 17:03:31.942983 waagent[2010]: 2025-05-27T17:03:31.942942Z INFO Daemon Daemon Primary interface is [eth0]
May 27 17:03:31.952500 systemd-networkd[1595]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:03:31.952508 systemd-networkd[1595]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:03:31.952539 systemd-networkd[1595]: eth0: DHCP lease lost
May 27 17:03:31.953578 waagent[2010]: 2025-05-27T17:03:31.953504Z INFO Daemon Daemon Create user account if not exists
May 27 17:03:31.957656 waagent[2010]: 2025-05-27T17:03:31.957607Z INFO Daemon Daemon User core already exists, skip useradd
May 27 17:03:31.961779 waagent[2010]: 2025-05-27T17:03:31.961725Z INFO Daemon Daemon Configure sudoer
May 27 17:03:31.979862 systemd-networkd[1595]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 27 17:03:31.981803 waagent[2010]: 2025-05-27T17:03:31.981733Z INFO Daemon Daemon Configure sshd
May 27 17:03:31.988754 waagent[2010]: 2025-05-27T17:03:31.988685Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
May 27 17:03:31.997541 waagent[2010]: 2025-05-27T17:03:31.997495Z INFO Daemon Daemon Deploy ssh public key.
May 27 17:03:33.118762 waagent[2010]: 2025-05-27T17:03:33.118711Z INFO Daemon Daemon Provisioning complete
May 27 17:03:33.133310 waagent[2010]: 2025-05-27T17:03:33.133257Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
May 27 17:03:33.137951 waagent[2010]: 2025-05-27T17:03:33.137901Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
May 27 17:03:33.145065 waagent[2010]: 2025-05-27T17:03:33.145008Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
May 27 17:03:33.246659 waagent[2107]: 2025-05-27T17:03:33.246585Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
May 27 17:03:33.247487 waagent[2107]: 2025-05-27T17:03:33.247119Z INFO ExtHandler ExtHandler OS: flatcar 4344.0.0
May 27 17:03:33.247487 waagent[2107]: 2025-05-27T17:03:33.247179Z INFO ExtHandler ExtHandler Python: 3.11.12
May 27 17:03:33.247487 waagent[2107]: 2025-05-27T17:03:33.247215Z INFO ExtHandler ExtHandler CPU Arch: aarch64
May 27 17:03:33.269065 waagent[2107]: 2025-05-27T17:03:33.269002Z INFO ExtHandler ExtHandler Distro: flatcar-4344.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
May 27 17:03:33.269362 waagent[2107]: 2025-05-27T17:03:33.269332Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 27 17:03:33.269502 waagent[2107]: 2025-05-27T17:03:33.269478Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 27 17:03:33.276114 waagent[2107]: 2025-05-27T17:03:33.276051Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 27 17:03:33.282286 waagent[2107]: 2025-05-27T17:03:33.282241Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
May 27 17:03:33.282915 waagent[2107]: 2025-05-27T17:03:33.282881Z INFO ExtHandler
May 27 17:03:33.283057 waagent[2107]: 2025-05-27T17:03:33.283035Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7e2dceb0-522a-4fca-a79c-71528a62a94d eTag: 10973329294223712585 source: Fabric]
May 27 17:03:33.283381 waagent[2107]: 2025-05-27T17:03:33.283354Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
May 27 17:03:33.284463 waagent[2107]: 2025-05-27T17:03:33.283908Z INFO ExtHandler
May 27 17:03:33.284463 waagent[2107]: 2025-05-27T17:03:33.283956Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
May 27 17:03:33.288012 waagent[2107]: 2025-05-27T17:03:33.287987Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
May 27 17:03:33.350404 waagent[2107]: 2025-05-27T17:03:33.350332Z INFO ExtHandler Downloaded certificate {'thumbprint': '65F08063B708AD01F0763A94B3A48A418EC74D82', 'hasPrivateKey': True}
May 27 17:03:33.350951 waagent[2107]: 2025-05-27T17:03:33.350909Z INFO ExtHandler Downloaded certificate {'thumbprint': '4DE29FC6AA941CD1EE03ECC49B9F340952909814', 'hasPrivateKey': False}
May 27 17:03:33.351409 waagent[2107]: 2025-05-27T17:03:33.351371Z INFO ExtHandler Fetch goal state completed
May 27 17:03:33.364121 waagent[2107]: 2025-05-27T17:03:33.364075Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
May 27 17:03:33.368078 waagent[2107]: 2025-05-27T17:03:33.368019Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2107
May 27 17:03:33.368330 waagent[2107]: 2025-05-27T17:03:33.368299Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
May 27 17:03:33.368715 waagent[2107]: 2025-05-27T17:03:33.368683Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
May 27 17:03:33.370027 waagent[2107]: 2025-05-27T17:03:33.369946Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk']
May 27 17:03:33.370445 waagent[2107]: 2025-05-27T17:03:33.370411Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
May 27 17:03:33.370646 waagent[2107]: 2025-05-27T17:03:33.370619Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
May 27 17:03:33.371213 waagent[2107]: 2025-05-27T17:03:33.371178Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
May 27 17:03:33.410846 waagent[2107]: 2025-05-27T17:03:33.410797Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 27 17:03:33.411182 waagent[2107]: 2025-05-27T17:03:33.411148Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 27 17:03:33.416531 waagent[2107]: 2025-05-27T17:03:33.416501Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 27 17:03:33.422020 systemd[1]: Reload requested from client PID 2124 ('systemctl') (unit waagent.service)...
May 27 17:03:33.422033 systemd[1]: Reloading...
May 27 17:03:33.506859 zram_generator::config[2163]: No configuration found.
May 27 17:03:33.579858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:03:33.664203 systemd[1]: Reloading finished in 241 ms.
May 27 17:03:33.678704 waagent[2107]: 2025-05-27T17:03:33.678024Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
May 27 17:03:33.678704 waagent[2107]: 2025-05-27T17:03:33.678170Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
May 27 17:03:35.079860 waagent[2107]: 2025-05-27T17:03:35.079617Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
May 27 17:03:35.080181 waagent[2107]: 2025-05-27T17:03:35.079981Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
May 27 17:03:35.080773 waagent[2107]: 2025-05-27T17:03:35.080700Z INFO ExtHandler ExtHandler Starting env monitor service.
May 27 17:03:35.080915 waagent[2107]: 2025-05-27T17:03:35.080831Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 27 17:03:35.081229 waagent[2107]: 2025-05-27T17:03:35.081187Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 27 17:03:35.081276 waagent[2107]: 2025-05-27T17:03:35.081240Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 27 17:03:35.081656 waagent[2107]: 2025-05-27T17:03:35.081621Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 27 17:03:35.081925 waagent[2107]: 2025-05-27T17:03:35.081808Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 27 17:03:35.081925 waagent[2107]: 2025-05-27T17:03:35.081864Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 27 17:03:35.081999 waagent[2107]: 2025-05-27T17:03:35.081928Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 27 17:03:35.082013 waagent[2107]: 2025-05-27T17:03:35.081996Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 27 17:03:35.082114 waagent[2107]: 2025-05-27T17:03:35.082092Z INFO EnvHandler ExtHandler Configure routes
May 27 17:03:35.082152 waagent[2107]: 2025-05-27T17:03:35.082135Z INFO EnvHandler ExtHandler Gateway:None
May 27 17:03:35.082170 waagent[2107]: 2025-05-27T17:03:35.082161Z INFO EnvHandler ExtHandler Routes:None
May 27 17:03:35.082659 waagent[2107]: 2025-05-27T17:03:35.082623Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 27 17:03:35.082659 waagent[2107]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 27 17:03:35.082659 waagent[2107]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
May 27 17:03:35.082659 waagent[2107]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 27 17:03:35.082659 waagent[2107]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 27 17:03:35.082659 waagent[2107]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 27 17:03:35.082659 waagent[2107]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 27 17:03:35.083313 waagent[2107]: 2025-05-27T17:03:35.083205Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 27 17:03:35.083313 waagent[2107]: 2025-05-27T17:03:35.083256Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 27 17:03:35.083852 waagent[2107]: 2025-05-27T17:03:35.083687Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 27 17:03:35.090446 waagent[2107]: 2025-05-27T17:03:35.089020Z INFO ExtHandler ExtHandler
May 27 17:03:35.090446 waagent[2107]: 2025-05-27T17:03:35.089097Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a0cfc25b-825f-4152-9e63-8f42a05d3419 correlation c7ea275a-d170-41ae-b693-a1df74cecf3f created: 2025-05-27T17:02:12.606705Z]
May 27 17:03:35.090446 waagent[2107]: 2025-05-27T17:03:35.089380Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
May 27 17:03:35.090446 waagent[2107]: 2025-05-27T17:03:35.089800Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
May 27 17:03:35.170103 waagent[2107]: 2025-05-27T17:03:35.170028Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
May 27 17:03:35.170103 waagent[2107]: Try `iptables -h' or 'iptables --help' for more information.)
May 27 17:03:35.170521 waagent[2107]: 2025-05-27T17:03:35.170485Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: ACE10F1B-3F0E-402F-9B4D-E01CD9511B92;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
May 27 17:03:35.235156 waagent[2107]: 2025-05-27T17:03:35.235080Z INFO MonitorHandler ExtHandler Network interfaces:
May 27 17:03:35.235156 waagent[2107]: Executing ['ip', '-a', '-o', 'link']:
May 27 17:03:35.235156 waagent[2107]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 27 17:03:35.235156 waagent[2107]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:26:45 brd ff:ff:ff:ff:ff:ff
May 27 17:03:35.235156 waagent[2107]: 3: enP1921s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:26:45 brd ff:ff:ff:ff:ff:ff\ altname enP1921p0s2
May 27 17:03:35.235156 waagent[2107]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 27 17:03:35.235156 waagent[2107]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 27 17:03:35.235156 waagent[2107]: 2: eth0 inet 10.200.20.19/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
May 27 17:03:35.235156 waagent[2107]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 27 17:03:35.235156 waagent[2107]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
May 27 17:03:35.235156 waagent[2107]: 2: eth0 inet6 fe80::222:48ff:fe79:2645/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 27 17:03:35.235156 waagent[2107]: 3: enP1921s1 inet6 fe80::222:48ff:fe79:2645/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 27 17:03:35.266858 waagent[2107]: 2025-05-27T17:03:35.266749Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
May 27 17:03:35.266858 waagent[2107]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 27 17:03:35.266858 waagent[2107]: pkts bytes target prot opt in out source destination
May 27 17:03:35.266858 waagent[2107]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 27 17:03:35.266858 waagent[2107]: pkts bytes target prot opt in out source destination
May 27 17:03:35.266858 waagent[2107]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 27 17:03:35.266858 waagent[2107]: pkts bytes target prot opt in out source destination
May 27 17:03:35.266858 waagent[2107]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 27 17:03:35.266858 waagent[2107]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 27 17:03:35.266858 waagent[2107]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 27 17:03:35.270338 waagent[2107]: 2025-05-27T17:03:35.270281Z INFO EnvHandler ExtHandler Current Firewall rules:
May 27 17:03:35.270338 waagent[2107]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 27 17:03:35.270338 waagent[2107]: pkts bytes target prot opt in out source destination
May 27 17:03:35.270338 waagent[2107]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 27 17:03:35.270338 waagent[2107]: pkts bytes target prot opt in out source destination
May 27 17:03:35.270338 waagent[2107]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 27 17:03:35.270338 waagent[2107]: pkts bytes target prot opt in out source destination
May 27 17:03:35.270338 waagent[2107]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 27 17:03:35.270338 waagent[2107]: 10 868 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 27 17:03:35.270338 waagent[2107]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 27 17:03:35.270597 waagent[2107]: 2025-05-27T17:03:35.270545Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
May 27 17:03:39.698413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 17:03:39.699940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:03:39.808524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:03:39.817310 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:03:39.921228 kubelet[2257]: E0527 17:03:39.921168 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:03:39.924125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:03:39.924247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:03:39.924525 systemd[1]: kubelet.service: Consumed 119ms CPU time, 107.7M memory peak.
May 27 17:03:50.174915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 17:03:50.176428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:03:50.281687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:03:50.286155 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:03:50.415300 kubelet[2271]: E0527 17:03:50.415240 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:03:50.417634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:03:50.417933 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:03:50.418484 systemd[1]: kubelet.service: Consumed 113ms CPU time, 105.4M memory peak. May 27 17:03:52.378360 chronyd[1846]: Selected source PHC0 May 27 17:04:00.160901 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:04:00.162854 systemd[1]: Started sshd@0-10.200.20.19:22-10.200.16.10:50630.service - OpenSSH per-connection server daemon (10.200.16.10:50630). May 27 17:04:00.574714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 27 17:04:00.576543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:04:00.784614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:04:00.787503 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:04:00.821161 kubelet[2289]: E0527 17:04:00.821086 2289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:04:00.824164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:04:00.824428 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:04:00.824949 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107M memory peak. May 27 17:04:00.844842 sshd[2279]: Accepted publickey for core from 10.200.16.10 port 50630 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:04:00.846008 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:04:00.850342 systemd-logind[1853]: New session 3 of user core. May 27 17:04:00.859980 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:04:01.277144 systemd[1]: Started sshd@1-10.200.20.19:22-10.200.16.10:50634.service - OpenSSH per-connection server daemon (10.200.16.10:50634). May 27 17:04:01.761668 sshd[2299]: Accepted publickey for core from 10.200.16.10 port 50634 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:04:01.762875 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:04:01.766872 systemd-logind[1853]: New session 4 of user core. May 27 17:04:01.779178 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 27 17:04:02.116272 sshd[2301]: Connection closed by 10.200.16.10 port 50634 May 27 17:04:02.115586 sshd-session[2299]: pam_unix(sshd:session): session closed for user core May 27 17:04:02.119076 systemd-logind[1853]: Session 4 logged out. Waiting for processes to exit. May 27 17:04:02.119707 systemd[1]: sshd@1-10.200.20.19:22-10.200.16.10:50634.service: Deactivated successfully. May 27 17:04:02.121281 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:04:02.123125 systemd-logind[1853]: Removed session 4. May 27 17:04:02.206743 systemd[1]: Started sshd@2-10.200.20.19:22-10.200.16.10:50648.service - OpenSSH per-connection server daemon (10.200.16.10:50648). May 27 17:04:02.694386 sshd[2307]: Accepted publickey for core from 10.200.16.10 port 50648 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:04:02.695621 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:04:02.699808 systemd-logind[1853]: New session 5 of user core. May 27 17:04:02.706011 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 17:04:03.050560 sshd[2309]: Connection closed by 10.200.16.10 port 50648 May 27 17:04:03.049993 sshd-session[2307]: pam_unix(sshd:session): session closed for user core May 27 17:04:03.053348 systemd[1]: sshd@2-10.200.20.19:22-10.200.16.10:50648.service: Deactivated successfully. May 27 17:04:03.054784 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:04:03.055433 systemd-logind[1853]: Session 5 logged out. Waiting for processes to exit. May 27 17:04:03.056678 systemd-logind[1853]: Removed session 5. May 27 17:04:03.139816 systemd[1]: Started sshd@3-10.200.20.19:22-10.200.16.10:50652.service - OpenSSH per-connection server daemon (10.200.16.10:50652). 
May 27 17:04:03.590171 sshd[2315]: Accepted publickey for core from 10.200.16.10 port 50652 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:04:03.593157 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:04:03.597186 systemd-logind[1853]: New session 6 of user core.
May 27 17:04:03.603985 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 17:04:03.915703 sshd[2317]: Connection closed by 10.200.16.10 port 50652
May 27 17:04:03.916380 sshd-session[2315]: pam_unix(sshd:session): session closed for user core
May 27 17:04:03.919555 systemd[1]: sshd@3-10.200.20.19:22-10.200.16.10:50652.service: Deactivated successfully.
May 27 17:04:03.921404 systemd[1]: session-6.scope: Deactivated successfully.
May 27 17:04:03.922165 systemd-logind[1853]: Session 6 logged out. Waiting for processes to exit.
May 27 17:04:03.923558 systemd-logind[1853]: Removed session 6.
May 27 17:04:04.001814 systemd[1]: Started sshd@4-10.200.20.19:22-10.200.16.10:50656.service - OpenSSH per-connection server daemon (10.200.16.10:50656).
May 27 17:04:04.484383 sshd[2323]: Accepted publickey for core from 10.200.16.10 port 50656 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:04:04.485576 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:04:04.489645 systemd-logind[1853]: New session 7 of user core.
May 27 17:04:04.493982 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 17:04:04.885625 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 17:04:04.885873 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:04:04.917051 sudo[2326]: pam_unix(sudo:session): session closed for user root
May 27 17:04:04.992937 sshd[2325]: Connection closed by 10.200.16.10 port 50656
May 27 17:04:04.993679 sshd-session[2323]: pam_unix(sshd:session): session closed for user core
May 27 17:04:04.997583 systemd[1]: sshd@4-10.200.20.19:22-10.200.16.10:50656.service: Deactivated successfully.
May 27 17:04:04.999087 systemd[1]: session-7.scope: Deactivated successfully.
May 27 17:04:04.999754 systemd-logind[1853]: Session 7 logged out. Waiting for processes to exit.
May 27 17:04:05.001213 systemd-logind[1853]: Removed session 7.
May 27 17:04:05.078083 systemd[1]: Started sshd@5-10.200.20.19:22-10.200.16.10:50662.service - OpenSSH per-connection server daemon (10.200.16.10:50662).
May 27 17:04:05.531245 sshd[2332]: Accepted publickey for core from 10.200.16.10 port 50662 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:04:05.532476 sshd-session[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:04:05.536791 systemd-logind[1853]: New session 8 of user core.
May 27 17:04:05.543025 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 17:04:05.784563 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 17:04:05.784796 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:04:05.791796 sudo[2336]: pam_unix(sudo:session): session closed for user root
May 27 17:04:05.795940 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 17:04:05.796533 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:04:05.804219 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:04:05.836989 augenrules[2358]: No rules
May 27 17:04:05.838302 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:04:05.838496 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:04:05.839805 sudo[2335]: pam_unix(sudo:session): session closed for user root
May 27 17:04:05.927540 sshd[2334]: Connection closed by 10.200.16.10 port 50662
May 27 17:04:05.928112 sshd-session[2332]: pam_unix(sshd:session): session closed for user core
May 27 17:04:05.930923 systemd-logind[1853]: Session 8 logged out. Waiting for processes to exit.
May 27 17:04:05.932627 systemd[1]: sshd@5-10.200.20.19:22-10.200.16.10:50662.service: Deactivated successfully.
May 27 17:04:05.934661 systemd[1]: session-8.scope: Deactivated successfully.
May 27 17:04:05.936797 systemd-logind[1853]: Removed session 8.
May 27 17:04:06.019878 systemd[1]: Started sshd@6-10.200.20.19:22-10.200.16.10:50670.service - OpenSSH per-connection server daemon (10.200.16.10:50670).
May 27 17:04:06.507562 sshd[2367]: Accepted publickey for core from 10.200.16.10 port 50670 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:04:06.508741 sshd-session[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:04:06.512973 systemd-logind[1853]: New session 9 of user core.
May 27 17:04:06.520153 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 17:04:06.778095 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 17:04:06.778334 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:04:08.605469 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 17:04:08.617143 (dockerd)[2389]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 17:04:09.679856 dockerd[2389]: time="2025-05-27T17:04:09.679143965Z" level=info msg="Starting up"
May 27 17:04:09.681273 dockerd[2389]: time="2025-05-27T17:04:09.681215538Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 17:04:09.833779 dockerd[2389]: time="2025-05-27T17:04:09.833360747Z" level=info msg="Loading containers: start."
May 27 17:04:09.862913 kernel: Initializing XFRM netlink socket
May 27 17:04:10.297529 systemd-networkd[1595]: docker0: Link UP
May 27 17:04:10.320220 dockerd[2389]: time="2025-05-27T17:04:10.320163136Z" level=info msg="Loading containers: done."
May 27 17:04:10.329660 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1844374791-merged.mount: Deactivated successfully.
May 27 17:04:10.368316 dockerd[2389]: time="2025-05-27T17:04:10.368260169Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 17:04:10.368493 dockerd[2389]: time="2025-05-27T17:04:10.368360470Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 17:04:10.368493 dockerd[2389]: time="2025-05-27T17:04:10.368482155Z" level=info msg="Initializing buildkit"
May 27 17:04:10.419312 dockerd[2389]: time="2025-05-27T17:04:10.419259531Z" level=info msg="Completed buildkit initialization"
May 27 17:04:10.426311 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 17:04:10.426462 dockerd[2389]: time="2025-05-27T17:04:10.425984535Z" level=info msg="Daemon has completed initialization"
May 27 17:04:10.426462 dockerd[2389]: time="2025-05-27T17:04:10.426394962Z" level=info msg="API listen on /run/docker.sock"
May 27 17:04:11.000598 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
May 27 17:04:11.074630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 27 17:04:11.076030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:11.181098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:11.187107 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:04:11.218833 kubelet[2597]: E0527 17:04:11.218785 2597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:04:11.221213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:04:11.221331 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:04:11.222048 systemd[1]: kubelet.service: Consumed 111ms CPU time, 107M memory peak.
May 27 17:04:11.257153 containerd[1873]: time="2025-05-27T17:04:11.257004203Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 27 17:04:12.363425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243518252.mount: Deactivated successfully.
May 27 17:04:13.370263 containerd[1873]: time="2025-05-27T17:04:13.370158738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:13.373834 containerd[1873]: time="2025-05-27T17:04:13.373762107Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326311"
May 27 17:04:13.377749 containerd[1873]: time="2025-05-27T17:04:13.377691746Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:13.383361 containerd[1873]: time="2025-05-27T17:04:13.383284300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:13.384358 containerd[1873]: time="2025-05-27T17:04:13.383952097Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 2.1269071s"
May 27 17:04:13.384358 containerd[1873]: time="2025-05-27T17:04:13.383988739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\""
May 27 17:04:13.384718 containerd[1873]: time="2025-05-27T17:04:13.384678218Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 17:04:13.741078 update_engine[1855]: I20250527 17:04:13.740734 1855 update_attempter.cc:509] Updating boot flags...
May 27 17:04:14.928371 containerd[1873]: time="2025-05-27T17:04:14.928321573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:14.933289 containerd[1873]: time="2025-05-27T17:04:14.933112674Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530547"
May 27 17:04:14.937996 containerd[1873]: time="2025-05-27T17:04:14.937976099Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:14.946401 containerd[1873]: time="2025-05-27T17:04:14.946349165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:14.946992 containerd[1873]: time="2025-05-27T17:04:14.946869490Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.562072075s"
May 27 17:04:14.946992 containerd[1873]: time="2025-05-27T17:04:14.946901804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\""
May 27 17:04:14.947382 containerd[1873]: time="2025-05-27T17:04:14.947346606Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 17:04:16.099378 containerd[1873]: time="2025-05-27T17:04:16.099324683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:16.102839 containerd[1873]: time="2025-05-27T17:04:16.102747401Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484190"
May 27 17:04:16.108171 containerd[1873]: time="2025-05-27T17:04:16.108120951Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:16.115780 containerd[1873]: time="2025-05-27T17:04:16.115707232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:16.116492 containerd[1873]: time="2025-05-27T17:04:16.116349170Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.168974315s"
May 27 17:04:16.116492 containerd[1873]: time="2025-05-27T17:04:16.116383892Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\""
May 27 17:04:16.117056 containerd[1873]: time="2025-05-27T17:04:16.116928386Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 27 17:04:17.637440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969908415.mount: Deactivated successfully.
May 27 17:04:17.944946 containerd[1873]: time="2025-05-27T17:04:17.944409126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:17.946964 containerd[1873]: time="2025-05-27T17:04:17.946920156Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377375"
May 27 17:04:17.950499 containerd[1873]: time="2025-05-27T17:04:17.950442035Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:17.954282 containerd[1873]: time="2025-05-27T17:04:17.954218757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:17.954860 containerd[1873]: time="2025-05-27T17:04:17.954447854Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.837287179s"
May 27 17:04:17.954860 containerd[1873]: time="2025-05-27T17:04:17.954475391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\""
May 27 17:04:17.954992 containerd[1873]: time="2025-05-27T17:04:17.954969195Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 17:04:18.640084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142458220.mount: Deactivated successfully.
May 27 17:04:19.587763 containerd[1873]: time="2025-05-27T17:04:19.587708955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:19.590071 containerd[1873]: time="2025-05-27T17:04:19.590035050Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
May 27 17:04:19.593456 containerd[1873]: time="2025-05-27T17:04:19.593430308Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:19.597545 containerd[1873]: time="2025-05-27T17:04:19.597484793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:19.598197 containerd[1873]: time="2025-05-27T17:04:19.598106634Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.64310995s"
May 27 17:04:19.598197 containerd[1873]: time="2025-05-27T17:04:19.598143588Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 27 17:04:19.598598 containerd[1873]: time="2025-05-27T17:04:19.598576645Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 17:04:20.160923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811567380.mount: Deactivated successfully.
May 27 17:04:20.187958 containerd[1873]: time="2025-05-27T17:04:20.187551878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:04:20.191766 containerd[1873]: time="2025-05-27T17:04:20.191715687Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 27 17:04:20.195922 containerd[1873]: time="2025-05-27T17:04:20.195862768Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:04:20.201220 containerd[1873]: time="2025-05-27T17:04:20.201159519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:04:20.201853 containerd[1873]: time="2025-05-27T17:04:20.201497749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 602.893679ms"
May 27 17:04:20.201853 containerd[1873]: time="2025-05-27T17:04:20.201527950Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 27 17:04:20.202030 containerd[1873]: time="2025-05-27T17:04:20.202003346Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 27 17:04:20.901004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556009817.mount: Deactivated successfully.
May 27 17:04:21.324803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 27 17:04:21.326718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:21.452543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:21.459151 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:04:21.489230 kubelet[2823]: E0527 17:04:21.489169 2823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:04:21.491339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:04:21.491458 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:04:21.491723 systemd[1]: kubelet.service: Consumed 116ms CPU time, 107M memory peak.
May 27 17:04:23.734607 containerd[1873]: time="2025-05-27T17:04:23.734546120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:23.737379 containerd[1873]: time="2025-05-27T17:04:23.737335201Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469"
May 27 17:04:23.741798 containerd[1873]: time="2025-05-27T17:04:23.741738348Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:23.746081 containerd[1873]: time="2025-05-27T17:04:23.745994017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:23.746649 containerd[1873]: time="2025-05-27T17:04:23.746618923Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.54458552s"
May 27 17:04:23.746749 containerd[1873]: time="2025-05-27T17:04:23.746732927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 27 17:04:25.809645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:25.809762 systemd[1]: kubelet.service: Consumed 116ms CPU time, 107M memory peak.
May 27 17:04:25.811881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:25.837234 systemd[1]: Reload requested from client PID 2891 ('systemctl') (unit session-9.scope)...
May 27 17:04:25.837248 systemd[1]: Reloading...
May 27 17:04:25.945851 zram_generator::config[2949]: No configuration found.
May 27 17:04:26.007881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:04:26.093257 systemd[1]: Reloading finished in 255 ms.
May 27 17:04:26.140384 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 17:04:26.140457 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 17:04:26.140755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:26.140803 systemd[1]: kubelet.service: Consumed 80ms CPU time, 95.2M memory peak.
May 27 17:04:26.142346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:26.337508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:26.344105 (kubelet)[3004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:04:26.466897 kubelet[3004]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:04:26.466897 kubelet[3004]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 17:04:26.466897 kubelet[3004]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:04:26.467570 kubelet[3004]: I0527 17:04:26.467520 3004 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:04:26.771101 kubelet[3004]: I0527 17:04:26.770979 3004 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 17:04:26.771101 kubelet[3004]: I0527 17:04:26.771018 3004 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:04:26.771507 kubelet[3004]: I0527 17:04:26.771243 3004 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 17:04:26.786884 kubelet[3004]: E0527 17:04:26.786807 3004 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
May 27 17:04:26.788040 kubelet[3004]: I0527 17:04:26.787988 3004 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:04:26.793986 kubelet[3004]: I0527 17:04:26.793957 3004 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:04:26.797307 kubelet[3004]: I0527 17:04:26.797287 3004 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:04:26.797850 kubelet[3004]: I0527 17:04:26.797795 3004 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:04:26.798010 kubelet[3004]: I0527 17:04:26.797852 3004 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-f939a1e004","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 17:04:26.798094 kubelet[3004]: I0527 17:04:26.798018 3004 topology_manager.go:138] "Creating topology manager with none policy"
May 27 17:04:26.798094 kubelet[3004]: I0527 17:04:26.798026 3004 container_manager_linux.go:304] "Creating device plugin manager"
May 27 17:04:26.798176 kubelet[3004]: I0527 17:04:26.798162 3004 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:04:26.799817 kubelet[3004]: I0527 17:04:26.799796 3004 kubelet.go:446] "Attempting to sync node with API server"
May 27 17:04:26.799864 kubelet[3004]: I0527 17:04:26.799841 3004 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 17:04:26.799864 kubelet[3004]: I0527 17:04:26.799864 3004 kubelet.go:352] "Adding apiserver pod source"
May 27 17:04:26.799909 kubelet[3004]: I0527 17:04:26.799873 3004 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 17:04:26.804703 kubelet[3004]: W0527 17:04:26.804442 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-a-f939a1e004&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused
May 27 17:04:26.804703 kubelet[3004]: E0527 17:04:26.804530 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-a-f939a1e004&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
May 27 17:04:26.804703 kubelet[3004]: W0527 17:04:26.804651 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused
May 27 17:04:26.804703 kubelet[3004]: E0527 17:04:26.804692 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
May 27 17:04:26.804890 kubelet[3004]: I0527 17:04:26.804801 3004 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 17:04:26.805392 kubelet[3004]: I0527 17:04:26.805184 3004 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 17:04:26.805392 kubelet[3004]: W0527 17:04:26.805249 3004 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 17:04:26.806088 kubelet[3004]: I0527 17:04:26.806070 3004 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 17:04:26.806295 kubelet[3004]: I0527 17:04:26.806284 3004 server.go:1287] "Started kubelet"
May 27 17:04:26.807589 kubelet[3004]: I0527 17:04:26.807566 3004 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 17:04:26.808526 kubelet[3004]: E0527 17:04:26.808070 3004 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.0.0-a-f939a1e004.18437120fa8bfd6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-a-f939a1e004,UID:ci-4344.0.0-a-f939a1e004,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-a-f939a1e004,},FirstTimestamp:2025-05-27 17:04:26.806254955 +0000 UTC m=+0.459047037,LastTimestamp:2025-05-27 17:04:26.806254955 +0000 UTC m=+0.459047037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-a-f939a1e004,}"
May 27 17:04:26.809035 kubelet[3004]: I0527 17:04:26.808984 3004 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 27 17:04:26.809744 kubelet[3004]: I0527 17:04:26.809723 3004 server.go:479] "Adding debug handlers to kubelet server"
May 27 17:04:26.810809 kubelet[3004]: I0527 17:04:26.810746 3004 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 17:04:26.811013 kubelet[3004]: I0527 17:04:26.810996 3004 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 17:04:26.811250 kubelet[3004]: I0527 17:04:26.811227 3004 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 17:04:26.813207 kubelet[3004]: I0527 17:04:26.813186 3004 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 17:04:26.814058 kubelet[3004]: I0527 17:04:26.813344 3004 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 17:04:26.814058 kubelet[3004]: I0527 17:04:26.813414 3004 reconciler.go:26] "Reconciler: start to sync state"
May 27 17:04:26.814058 kubelet[3004]: W0527 17:04:26.813799 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused
May 27 17:04:26.814058 kubelet[3004]: E0527 17:04:26.813861 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
May 27 17:04:26.814386 kubelet[3004]: I0527 17:04:26.814366 3004 factory.go:221] Registration of the systemd container factory successfully
May 27 17:04:26.814483 kubelet[3004]: I0527 17:04:26.814465 3004 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 17:04:26.815076 kubelet[3004]: E0527 17:04:26.815059 3004 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 17:04:26.815185 kubelet[3004]: E0527 17:04:26.815124 3004 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-f939a1e004\" not found"
May 27 17:04:26.815412 kubelet[3004]: E0527 17:04:26.815371 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-f939a1e004?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="200ms"
May 27 17:04:26.815772 kubelet[3004]: I0527 17:04:26.815752 3004 factory.go:221] Registration of the containerd container factory successfully
May 27 17:04:26.842894 kubelet[3004]: I0527 17:04:26.842869 3004 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 17:04:26.842894 kubelet[3004]: I0527 17:04:26.842887 3004 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 17:04:26.843032 kubelet[3004]: I0527 17:04:26.842909 3004 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:04:26.867689 kubelet[3004]: I0527 17:04:26.867642 3004 policy_none.go:49] "None policy: Start"
May 27 17:04:26.867843 kubelet[3004]: I0527 17:04:26.867707 3004 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 17:04:26.867843 kubelet[3004]:
I0527 17:04:26.867722 3004 state_mem.go:35] "Initializing new in-memory state store" May 27 17:04:26.876588 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:04:26.890192 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 17:04:26.893710 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 17:04:26.913838 kubelet[3004]: I0527 17:04:26.912124 3004 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:04:26.913838 kubelet[3004]: I0527 17:04:26.912332 3004 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:04:26.913838 kubelet[3004]: I0527 17:04:26.912342 3004 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:04:26.914324 kubelet[3004]: I0527 17:04:26.914305 3004 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:04:26.916427 kubelet[3004]: I0527 17:04:26.916402 3004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:04:26.917134 kubelet[3004]: E0527 17:04:26.917119 3004 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:04:26.917268 kubelet[3004]: E0527 17:04:26.917257 3004 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.0.0-a-f939a1e004\" not found" May 27 17:04:26.917885 kubelet[3004]: I0527 17:04:26.917865 3004 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 17:04:26.918087 kubelet[3004]: I0527 17:04:26.918076 3004 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 17:04:26.918285 kubelet[3004]: I0527 17:04:26.918273 3004 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:04:26.918378 kubelet[3004]: I0527 17:04:26.918371 3004 kubelet.go:2382] "Starting kubelet main sync loop" May 27 17:04:26.918474 kubelet[3004]: E0527 17:04:26.918463 3004 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 27 17:04:26.919640 kubelet[3004]: W0527 17:04:26.918895 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused May 27 17:04:26.919754 kubelet[3004]: E0527 17:04:26.919741 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" May 27 17:04:27.015095 kubelet[3004]: I0527 17:04:27.015062 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.015483 kubelet[3004]: E0527 17:04:27.015451 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.015794 kubelet[3004]: E0527 17:04:27.015768 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-f939a1e004?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="400ms" May 27 17:04:27.029636 systemd[1]: Created slice kubepods-burstable-pod4baff5ffc8de94474fd6d53ae63489d2.slice - libcontainer container kubepods-burstable-pod4baff5ffc8de94474fd6d53ae63489d2.slice. May 27 17:04:27.040634 kubelet[3004]: E0527 17:04:27.040591 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.043773 systemd[1]: Created slice kubepods-burstable-pode1898bf0adaa9c2cb10b857bdb19649c.slice - libcontainer container kubepods-burstable-pode1898bf0adaa9c2cb10b857bdb19649c.slice. May 27 17:04:27.045901 kubelet[3004]: E0527 17:04:27.045869 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.048155 systemd[1]: Created slice kubepods-burstable-poda67109a1ad843b3b372e14d949387cba.slice - libcontainer container kubepods-burstable-poda67109a1ad843b3b372e14d949387cba.slice. 
May 27 17:04:27.049884 kubelet[3004]: E0527 17:04:27.049676 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.115031 kubelet[3004]: I0527 17:04:27.114990 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:27.115175 kubelet[3004]: I0527 17:04:27.115067 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1898bf0adaa9c2cb10b857bdb19649c-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-f939a1e004\" (UID: \"e1898bf0adaa9c2cb10b857bdb19649c\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" May 27 17:04:27.115175 kubelet[3004]: I0527 17:04:27.115082 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a67109a1ad843b3b372e14d949387cba-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-f939a1e004\" (UID: \"a67109a1ad843b3b372e14d949387cba\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" May 27 17:04:27.115175 kubelet[3004]: I0527 17:04:27.115094 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a67109a1ad843b3b372e14d949387cba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-f939a1e004\" (UID: \"a67109a1ad843b3b372e14d949387cba\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" May 27 17:04:27.115175 kubelet[3004]: I0527 
17:04:27.115106 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:27.115175 kubelet[3004]: I0527 17:04:27.115155 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:27.115261 kubelet[3004]: I0527 17:04:27.115165 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:27.115261 kubelet[3004]: I0527 17:04:27.115177 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:27.115261 kubelet[3004]: I0527 17:04:27.115186 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a67109a1ad843b3b372e14d949387cba-k8s-certs\") pod 
\"kube-apiserver-ci-4344.0.0-a-f939a1e004\" (UID: \"a67109a1ad843b3b372e14d949387cba\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" May 27 17:04:27.217081 kubelet[3004]: I0527 17:04:27.217044 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.217417 kubelet[3004]: E0527 17:04:27.217393 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.342128 containerd[1873]: time="2025-05-27T17:04:27.342091142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-f939a1e004,Uid:4baff5ffc8de94474fd6d53ae63489d2,Namespace:kube-system,Attempt:0,}" May 27 17:04:27.347720 containerd[1873]: time="2025-05-27T17:04:27.347683351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-f939a1e004,Uid:e1898bf0adaa9c2cb10b857bdb19649c,Namespace:kube-system,Attempt:0,}" May 27 17:04:27.351057 containerd[1873]: time="2025-05-27T17:04:27.351022184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-f939a1e004,Uid:a67109a1ad843b3b372e14d949387cba,Namespace:kube-system,Attempt:0,}" May 27 17:04:27.417032 kubelet[3004]: E0527 17:04:27.416983 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-f939a1e004?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="800ms" May 27 17:04:27.431694 containerd[1873]: time="2025-05-27T17:04:27.431643304Z" level=info msg="connecting to shim e661e5ef58a5064af6c173b183c50c497c8e109a2b6a343fcff5c5d715254ad9" address="unix:///run/containerd/s/dbc7e399b3f140351f3d7a27356c109c79b4b89d72f6d6c5ed277797eadc59f5" namespace=k8s.io protocol=ttrpc 
version=3 May 27 17:04:27.450045 systemd[1]: Started cri-containerd-e661e5ef58a5064af6c173b183c50c497c8e109a2b6a343fcff5c5d715254ad9.scope - libcontainer container e661e5ef58a5064af6c173b183c50c497c8e109a2b6a343fcff5c5d715254ad9. May 27 17:04:27.473029 containerd[1873]: time="2025-05-27T17:04:27.472979760Z" level=info msg="connecting to shim 5fea536b516ee627d7be0d645afe78d2d41e4b8b87683610614d5f01a7fa9ce7" address="unix:///run/containerd/s/62fe054ab384f5e1794cb95955bdef1ec69278ed07738be5efd7aa39aeb534c8" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:27.487098 containerd[1873]: time="2025-05-27T17:04:27.487031307Z" level=info msg="connecting to shim 5e3708dcd9da7ef1e2ec11244360699b579d1e26e88f0e6ac24028bd96af94f8" address="unix:///run/containerd/s/dc604901389f450eaeeea68db85eeedefd15d6f0aba6889fb75c2991bd2238a8" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:27.506008 systemd[1]: Started cri-containerd-5fea536b516ee627d7be0d645afe78d2d41e4b8b87683610614d5f01a7fa9ce7.scope - libcontainer container 5fea536b516ee627d7be0d645afe78d2d41e4b8b87683610614d5f01a7fa9ce7. May 27 17:04:27.510802 systemd[1]: Started cri-containerd-5e3708dcd9da7ef1e2ec11244360699b579d1e26e88f0e6ac24028bd96af94f8.scope - libcontainer container 5e3708dcd9da7ef1e2ec11244360699b579d1e26e88f0e6ac24028bd96af94f8. 
May 27 17:04:27.525449 containerd[1873]: time="2025-05-27T17:04:27.525392230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-f939a1e004,Uid:4baff5ffc8de94474fd6d53ae63489d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e661e5ef58a5064af6c173b183c50c497c8e109a2b6a343fcff5c5d715254ad9\"" May 27 17:04:27.534725 containerd[1873]: time="2025-05-27T17:04:27.534681435Z" level=info msg="CreateContainer within sandbox \"e661e5ef58a5064af6c173b183c50c497c8e109a2b6a343fcff5c5d715254ad9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:04:27.562218 containerd[1873]: time="2025-05-27T17:04:27.562172678Z" level=info msg="Container 2c4f68addbcd27fa7d739ec82bddc4814849e0b81d223d59432131936d895210: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:27.568222 containerd[1873]: time="2025-05-27T17:04:27.568151627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-f939a1e004,Uid:a67109a1ad843b3b372e14d949387cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e3708dcd9da7ef1e2ec11244360699b579d1e26e88f0e6ac24028bd96af94f8\"" May 27 17:04:27.570719 containerd[1873]: time="2025-05-27T17:04:27.570626441Z" level=info msg="CreateContainer within sandbox \"5e3708dcd9da7ef1e2ec11244360699b579d1e26e88f0e6ac24028bd96af94f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:04:27.572535 containerd[1873]: time="2025-05-27T17:04:27.572456059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-f939a1e004,Uid:e1898bf0adaa9c2cb10b857bdb19649c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fea536b516ee627d7be0d645afe78d2d41e4b8b87683610614d5f01a7fa9ce7\"" May 27 17:04:27.574602 containerd[1873]: time="2025-05-27T17:04:27.574561165Z" level=info msg="CreateContainer within sandbox \"5fea536b516ee627d7be0d645afe78d2d41e4b8b87683610614d5f01a7fa9ce7\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:04:27.588497 containerd[1873]: time="2025-05-27T17:04:27.588454163Z" level=info msg="CreateContainer within sandbox \"e661e5ef58a5064af6c173b183c50c497c8e109a2b6a343fcff5c5d715254ad9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c4f68addbcd27fa7d739ec82bddc4814849e0b81d223d59432131936d895210\"" May 27 17:04:27.589395 containerd[1873]: time="2025-05-27T17:04:27.589368312Z" level=info msg="StartContainer for \"2c4f68addbcd27fa7d739ec82bddc4814849e0b81d223d59432131936d895210\"" May 27 17:04:27.590810 containerd[1873]: time="2025-05-27T17:04:27.590772645Z" level=info msg="connecting to shim 2c4f68addbcd27fa7d739ec82bddc4814849e0b81d223d59432131936d895210" address="unix:///run/containerd/s/dbc7e399b3f140351f3d7a27356c109c79b4b89d72f6d6c5ed277797eadc59f5" protocol=ttrpc version=3 May 27 17:04:27.607102 systemd[1]: Started cri-containerd-2c4f68addbcd27fa7d739ec82bddc4814849e0b81d223d59432131936d895210.scope - libcontainer container 2c4f68addbcd27fa7d739ec82bddc4814849e0b81d223d59432131936d895210. 
May 27 17:04:27.612505 containerd[1873]: time="2025-05-27T17:04:27.612465097Z" level=info msg="Container 55df8f931b930c6e5179d6da18ddbae4a8be9c3312a76d37e5fb591b801b0053: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:27.619722 kubelet[3004]: I0527 17:04:27.619673 3004 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.622607 kubelet[3004]: E0527 17:04:27.620357 3004 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.627218 containerd[1873]: time="2025-05-27T17:04:27.627122736Z" level=info msg="Container c9281fea87d9b7120e19d6c233e53fd06802f1c6247b03be47934ae9d4725985: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:27.635886 kubelet[3004]: W0527 17:04:27.635801 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused May 27 17:04:27.636177 kubelet[3004]: E0527 17:04:27.636080 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" May 27 17:04:27.642190 containerd[1873]: time="2025-05-27T17:04:27.642046454Z" level=info msg="CreateContainer within sandbox \"5e3708dcd9da7ef1e2ec11244360699b579d1e26e88f0e6ac24028bd96af94f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"55df8f931b930c6e5179d6da18ddbae4a8be9c3312a76d37e5fb591b801b0053\"" May 27 17:04:27.642728 containerd[1873]: time="2025-05-27T17:04:27.642691611Z" level=info 
msg="StartContainer for \"55df8f931b930c6e5179d6da18ddbae4a8be9c3312a76d37e5fb591b801b0053\"" May 27 17:04:27.643683 containerd[1873]: time="2025-05-27T17:04:27.643651377Z" level=info msg="connecting to shim 55df8f931b930c6e5179d6da18ddbae4a8be9c3312a76d37e5fb591b801b0053" address="unix:///run/containerd/s/dc604901389f450eaeeea68db85eeedefd15d6f0aba6889fb75c2991bd2238a8" protocol=ttrpc version=3 May 27 17:04:27.661339 containerd[1873]: time="2025-05-27T17:04:27.661273709Z" level=info msg="CreateContainer within sandbox \"5fea536b516ee627d7be0d645afe78d2d41e4b8b87683610614d5f01a7fa9ce7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c9281fea87d9b7120e19d6c233e53fd06802f1c6247b03be47934ae9d4725985\"" May 27 17:04:27.662129 systemd[1]: Started cri-containerd-55df8f931b930c6e5179d6da18ddbae4a8be9c3312a76d37e5fb591b801b0053.scope - libcontainer container 55df8f931b930c6e5179d6da18ddbae4a8be9c3312a76d37e5fb591b801b0053. May 27 17:04:27.664209 containerd[1873]: time="2025-05-27T17:04:27.664170425Z" level=info msg="StartContainer for \"c9281fea87d9b7120e19d6c233e53fd06802f1c6247b03be47934ae9d4725985\"" May 27 17:04:27.664986 containerd[1873]: time="2025-05-27T17:04:27.664952673Z" level=info msg="StartContainer for \"2c4f68addbcd27fa7d739ec82bddc4814849e0b81d223d59432131936d895210\" returns successfully" May 27 17:04:27.666157 containerd[1873]: time="2025-05-27T17:04:27.665298612Z" level=info msg="connecting to shim c9281fea87d9b7120e19d6c233e53fd06802f1c6247b03be47934ae9d4725985" address="unix:///run/containerd/s/62fe054ab384f5e1794cb95955bdef1ec69278ed07738be5efd7aa39aeb534c8" protocol=ttrpc version=3 May 27 17:04:27.694017 systemd[1]: Started cri-containerd-c9281fea87d9b7120e19d6c233e53fd06802f1c6247b03be47934ae9d4725985.scope - libcontainer container c9281fea87d9b7120e19d6c233e53fd06802f1c6247b03be47934ae9d4725985. 
May 27 17:04:27.718773 containerd[1873]: time="2025-05-27T17:04:27.718575821Z" level=info msg="StartContainer for \"55df8f931b930c6e5179d6da18ddbae4a8be9c3312a76d37e5fb591b801b0053\" returns successfully" May 27 17:04:27.721744 kubelet[3004]: W0527 17:04:27.721640 3004 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused May 27 17:04:27.722840 kubelet[3004]: E0527 17:04:27.722109 3004 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" May 27 17:04:27.764185 containerd[1873]: time="2025-05-27T17:04:27.764150113Z" level=info msg="StartContainer for \"c9281fea87d9b7120e19d6c233e53fd06802f1c6247b03be47934ae9d4725985\" returns successfully" May 27 17:04:27.928337 kubelet[3004]: E0527 17:04:27.928039 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.932706 kubelet[3004]: E0527 17:04:27.932649 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:27.936763 kubelet[3004]: E0527 17:04:27.936739 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:28.427336 kubelet[3004]: I0527 17:04:28.427197 3004 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4344.0.0-a-f939a1e004" May 27 17:04:28.937842 kubelet[3004]: E0527 17:04:28.937794 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:28.939167 kubelet[3004]: E0527 17:04:28.938918 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:28.939611 kubelet[3004]: E0527 17:04:28.939590 3004 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:29.130269 kubelet[3004]: E0527 17:04:29.130224 3004 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.0.0-a-f939a1e004\" not found" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:29.197373 kubelet[3004]: I0527 17:04:29.196982 3004 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:29.216110 kubelet[3004]: I0527 17:04:29.216063 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" May 27 17:04:29.278434 kubelet[3004]: E0527 17:04:29.278385 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-f939a1e004\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" May 27 17:04:29.278765 kubelet[3004]: I0527 17:04:29.278418 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:29.282261 kubelet[3004]: E0527 17:04:29.282041 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:29.282261 kubelet[3004]: I0527 17:04:29.282076 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" May 27 17:04:29.283958 kubelet[3004]: E0527 17:04:29.283930 3004 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-f939a1e004\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" May 27 17:04:29.802296 kubelet[3004]: I0527 17:04:29.802208 3004 apiserver.go:52] "Watching apiserver" May 27 17:04:29.813775 kubelet[3004]: I0527 17:04:29.813735 3004 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:04:29.937233 kubelet[3004]: I0527 17:04:29.937086 3004 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" May 27 17:04:29.949224 kubelet[3004]: W0527 17:04:29.949131 3004 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:04:31.379383 systemd[1]: Reload requested from client PID 3278 ('systemctl') (unit session-9.scope)... May 27 17:04:31.379732 systemd[1]: Reloading... May 27 17:04:31.475979 zram_generator::config[3330]: No configuration found. May 27 17:04:31.535560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:04:31.630069 systemd[1]: Reloading finished in 249 ms. May 27 17:04:31.651606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:04:31.666362 systemd[1]: kubelet.service: Deactivated successfully. 
May 27 17:04:31.666631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:04:31.666689 systemd[1]: kubelet.service: Consumed 747ms CPU time, 127.6M memory peak. May 27 17:04:31.669244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:04:31.773865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:04:31.781180 (kubelet)[3388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:04:31.816056 kubelet[3388]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:04:31.816056 kubelet[3388]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:04:31.816056 kubelet[3388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 17:04:31.816409 kubelet[3388]: I0527 17:04:31.816148 3388 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:04:31.821839 kubelet[3388]: I0527 17:04:31.821793 3388 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 17:04:31.821839 kubelet[3388]: I0527 17:04:31.821840 3388 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:04:31.822068 kubelet[3388]: I0527 17:04:31.822049 3388 server.go:954] "Client rotation is on, will bootstrap in background" May 27 17:04:31.823158 kubelet[3388]: I0527 17:04:31.823135 3388 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 17:04:31.860334 kubelet[3388]: I0527 17:04:31.860145 3388 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:04:31.865974 kubelet[3388]: I0527 17:04:31.865869 3388 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:04:31.868850 kubelet[3388]: I0527 17:04:31.868606 3388 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:04:31.868850 kubelet[3388]: I0527 17:04:31.868746 3388 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:04:31.869110 kubelet[3388]: I0527 17:04:31.868770 3388 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-f939a1e004","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:04:31.869237 kubelet[3388]: I0527 17:04:31.869225 3388 topology_manager.go:138] "Creating topology manager 
with none policy" May 27 17:04:31.869281 kubelet[3388]: I0527 17:04:31.869274 3388 container_manager_linux.go:304] "Creating device plugin manager" May 27 17:04:31.869359 kubelet[3388]: I0527 17:04:31.869351 3388 state_mem.go:36] "Initialized new in-memory state store" May 27 17:04:31.869535 kubelet[3388]: I0527 17:04:31.869528 3388 kubelet.go:446] "Attempting to sync node with API server" May 27 17:04:31.870239 kubelet[3388]: I0527 17:04:31.870224 3388 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:04:31.870407 kubelet[3388]: I0527 17:04:31.870359 3388 kubelet.go:352] "Adding apiserver pod source" May 27 17:04:31.870407 kubelet[3388]: I0527 17:04:31.870371 3388 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:04:31.879504 kubelet[3388]: I0527 17:04:31.879474 3388 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:04:31.880128 kubelet[3388]: I0527 17:04:31.880106 3388 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:04:31.881290 kubelet[3388]: I0527 17:04:31.880672 3388 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:04:31.881519 kubelet[3388]: I0527 17:04:31.881439 3388 server.go:1287] "Started kubelet" May 27 17:04:31.883506 kubelet[3388]: I0527 17:04:31.883459 3388 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:04:31.883912 kubelet[3388]: I0527 17:04:31.883767 3388 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:04:31.885278 kubelet[3388]: I0527 17:04:31.884754 3388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:04:31.888116 kubelet[3388]: I0527 17:04:31.885430 3388 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:04:31.888972 kubelet[3388]: I0527 
17:04:31.888951 3388 server.go:479] "Adding debug handlers to kubelet server" May 27 17:04:31.890277 kubelet[3388]: I0527 17:04:31.886166 3388 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:04:31.894933 kubelet[3388]: E0527 17:04:31.894903 3388 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:04:31.898049 kubelet[3388]: I0527 17:04:31.897374 3388 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:04:31.898049 kubelet[3388]: I0527 17:04:31.897503 3388 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:04:31.898049 kubelet[3388]: I0527 17:04:31.897609 3388 reconciler.go:26] "Reconciler: start to sync state" May 27 17:04:31.901790 kubelet[3388]: I0527 17:04:31.901438 3388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:04:31.904833 kubelet[3388]: I0527 17:04:31.904774 3388 factory.go:221] Registration of the containerd container factory successfully May 27 17:04:31.904833 kubelet[3388]: I0527 17:04:31.904795 3388 factory.go:221] Registration of the systemd container factory successfully May 27 17:04:31.905164 kubelet[3388]: I0527 17:04:31.905107 3388 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:04:31.905433 kubelet[3388]: I0527 17:04:31.905407 3388 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 17:04:31.905433 kubelet[3388]: I0527 17:04:31.905433 3388 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 17:04:31.905557 kubelet[3388]: I0527 17:04:31.905541 3388 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:04:31.905557 kubelet[3388]: I0527 17:04:31.905554 3388 kubelet.go:2382] "Starting kubelet main sync loop" May 27 17:04:31.905714 kubelet[3388]: E0527 17:04:31.905599 3388 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:04:31.941099 kubelet[3388]: I0527 17:04:31.941074 3388 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:04:31.941254 kubelet[3388]: I0527 17:04:31.941243 3388 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:04:31.941311 kubelet[3388]: I0527 17:04:31.941304 3388 state_mem.go:36] "Initialized new in-memory state store" May 27 17:04:31.941504 kubelet[3388]: I0527 17:04:31.941489 3388 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:04:31.941566 kubelet[3388]: I0527 17:04:31.941546 3388 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:04:31.941645 kubelet[3388]: I0527 17:04:31.941637 3388 policy_none.go:49] "None policy: Start" May 27 17:04:31.941690 kubelet[3388]: I0527 17:04:31.941683 3388 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:04:31.941747 kubelet[3388]: I0527 17:04:31.941739 3388 state_mem.go:35] "Initializing new in-memory state store" May 27 17:04:31.941955 kubelet[3388]: I0527 17:04:31.941940 3388 state_mem.go:75] "Updated machine memory state" May 27 17:04:31.946048 kubelet[3388]: I0527 17:04:31.946024 3388 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:04:31.947870 kubelet[3388]: I0527 
17:04:31.947851 3388 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:04:31.947949 kubelet[3388]: I0527 17:04:31.947868 3388 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:04:31.948536 kubelet[3388]: I0527 17:04:31.948521 3388 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:04:31.949658 kubelet[3388]: E0527 17:04:31.949631 3388 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:04:32.006750 kubelet[3388]: I0527 17:04:32.006715 3388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.007130 kubelet[3388]: I0527 17:04:32.007099 3388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.007365 kubelet[3388]: I0527 17:04:32.007267 3388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.014948 kubelet[3388]: W0527 17:04:32.014911 3388 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:04:32.019227 kubelet[3388]: W0527 17:04:32.019158 3388 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:04:32.019603 kubelet[3388]: W0527 17:04:32.019581 3388 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:04:32.019692 kubelet[3388]: E0527 17:04:32.019626 3388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-f939a1e004\" already 
exists" pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.055382 kubelet[3388]: I0527 17:04:32.055356 3388 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:32.067658 kubelet[3388]: I0527 17:04:32.067582 3388 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:32.067842 kubelet[3388]: I0527 17:04:32.067749 3388 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-f939a1e004" May 27 17:04:32.099113 kubelet[3388]: I0527 17:04:32.099069 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.099113 kubelet[3388]: I0527 17:04:32.099113 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.099293 kubelet[3388]: I0527 17:04:32.099133 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a67109a1ad843b3b372e14d949387cba-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-f939a1e004\" (UID: \"a67109a1ad843b3b372e14d949387cba\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.099293 kubelet[3388]: I0527 17:04:32.099145 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1898bf0adaa9c2cb10b857bdb19649c-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-f939a1e004\" (UID: \"e1898bf0adaa9c2cb10b857bdb19649c\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.099293 kubelet[3388]: I0527 17:04:32.099157 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a67109a1ad843b3b372e14d949387cba-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-f939a1e004\" (UID: \"a67109a1ad843b3b372e14d949387cba\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.099293 kubelet[3388]: I0527 17:04:32.099170 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a67109a1ad843b3b372e14d949387cba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-f939a1e004\" (UID: \"a67109a1ad843b3b372e14d949387cba\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.099293 kubelet[3388]: I0527 17:04:32.099180 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.099371 kubelet[3388]: I0527 17:04:32.099191 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.099371 
kubelet[3388]: I0527 17:04:32.099204 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4baff5ffc8de94474fd6d53ae63489d2-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-f939a1e004\" (UID: \"4baff5ffc8de94474fd6d53ae63489d2\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.879180 kubelet[3388]: I0527 17:04:32.879070 3388 apiserver.go:52] "Watching apiserver" May 27 17:04:32.898522 kubelet[3388]: I0527 17:04:32.898476 3388 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:04:32.926335 kubelet[3388]: I0527 17:04:32.926305 3388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.936111 kubelet[3388]: W0527 17:04:32.935850 3388 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:04:32.936111 kubelet[3388]: E0527 17:04:32.935919 3388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-f939a1e004\" already exists" pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" May 27 17:04:32.954513 kubelet[3388]: I0527 17:04:32.954403 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.0.0-a-f939a1e004" podStartSLOduration=0.95438228 podStartE2EDuration="954.38228ms" podCreationTimestamp="2025-05-27 17:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:04:32.945176579 +0000 UTC m=+1.160383588" watchObservedRunningTime="2025-05-27 17:04:32.95438228 +0000 UTC m=+1.169589289" May 27 17:04:32.964803 kubelet[3388]: I0527 17:04:32.964739 3388 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-f939a1e004" podStartSLOduration=0.96472175 podStartE2EDuration="964.72175ms" podCreationTimestamp="2025-05-27 17:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:04:32.964651851 +0000 UTC m=+1.179858860" watchObservedRunningTime="2025-05-27 17:04:32.96472175 +0000 UTC m=+1.179928759" May 27 17:04:32.964995 kubelet[3388]: I0527 17:04:32.964816 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.0.0-a-f939a1e004" podStartSLOduration=3.964811858 podStartE2EDuration="3.964811858s" podCreationTimestamp="2025-05-27 17:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:04:32.954569744 +0000 UTC m=+1.169776753" watchObservedRunningTime="2025-05-27 17:04:32.964811858 +0000 UTC m=+1.180018867" May 27 17:04:38.050433 kubelet[3388]: I0527 17:04:38.050391 3388 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:04:38.051054 kubelet[3388]: I0527 17:04:38.050957 3388 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:04:38.051080 containerd[1873]: time="2025-05-27T17:04:38.050755607Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:04:38.659462 systemd[1]: Created slice kubepods-besteffort-pod379fc8ba_aa95_4f00_99e0_2745c045910d.slice - libcontainer container kubepods-besteffort-pod379fc8ba_aa95_4f00_99e0_2745c045910d.slice. 
May 27 17:04:38.741132 kubelet[3388]: I0527 17:04:38.740972 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/379fc8ba-aa95-4f00-99e0-2745c045910d-kube-proxy\") pod \"kube-proxy-d5fv2\" (UID: \"379fc8ba-aa95-4f00-99e0-2745c045910d\") " pod="kube-system/kube-proxy-d5fv2" May 27 17:04:38.741132 kubelet[3388]: I0527 17:04:38.741018 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/379fc8ba-aa95-4f00-99e0-2745c045910d-xtables-lock\") pod \"kube-proxy-d5fv2\" (UID: \"379fc8ba-aa95-4f00-99e0-2745c045910d\") " pod="kube-system/kube-proxy-d5fv2" May 27 17:04:38.741132 kubelet[3388]: I0527 17:04:38.741033 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/379fc8ba-aa95-4f00-99e0-2745c045910d-lib-modules\") pod \"kube-proxy-d5fv2\" (UID: \"379fc8ba-aa95-4f00-99e0-2745c045910d\") " pod="kube-system/kube-proxy-d5fv2" May 27 17:04:38.741132 kubelet[3388]: I0527 17:04:38.741070 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc6pn\" (UniqueName: \"kubernetes.io/projected/379fc8ba-aa95-4f00-99e0-2745c045910d-kube-api-access-wc6pn\") pod \"kube-proxy-d5fv2\" (UID: \"379fc8ba-aa95-4f00-99e0-2745c045910d\") " pod="kube-system/kube-proxy-d5fv2" May 27 17:04:38.846712 kubelet[3388]: E0527 17:04:38.846661 3388 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 17:04:38.847031 kubelet[3388]: E0527 17:04:38.846698 3388 projected.go:194] Error preparing data for projected volume kube-api-access-wc6pn for pod kube-system/kube-proxy-d5fv2: configmap "kube-root-ca.crt" not found May 27 17:04:38.847031 kubelet[3388]: E0527 17:04:38.846976 3388 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/379fc8ba-aa95-4f00-99e0-2745c045910d-kube-api-access-wc6pn podName:379fc8ba-aa95-4f00-99e0-2745c045910d nodeName:}" failed. No retries permitted until 2025-05-27 17:04:39.346941548 +0000 UTC m=+7.562148565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wc6pn" (UniqueName: "kubernetes.io/projected/379fc8ba-aa95-4f00-99e0-2745c045910d-kube-api-access-wc6pn") pod "kube-proxy-d5fv2" (UID: "379fc8ba-aa95-4f00-99e0-2745c045910d") : configmap "kube-root-ca.crt" not found May 27 17:04:39.180858 kubelet[3388]: I0527 17:04:39.180515 3388 status_manager.go:890] "Failed to get status for pod" podUID="584cbd05-e2ab-4ab3-bc27-1f7206deaa1a" pod="tigera-operator/tigera-operator-844669ff44-2d5rv" err="pods \"tigera-operator-844669ff44-2d5rv\" is forbidden: User \"system:node:ci-4344.0.0-a-f939a1e004\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4344.0.0-a-f939a1e004' and this object" May 27 17:04:39.184321 systemd[1]: Created slice kubepods-besteffort-pod584cbd05_e2ab_4ab3_bc27_1f7206deaa1a.slice - libcontainer container kubepods-besteffort-pod584cbd05_e2ab_4ab3_bc27_1f7206deaa1a.slice. 
May 27 17:04:39.244508 kubelet[3388]: I0527 17:04:39.244464 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tms88\" (UniqueName: \"kubernetes.io/projected/584cbd05-e2ab-4ab3-bc27-1f7206deaa1a-kube-api-access-tms88\") pod \"tigera-operator-844669ff44-2d5rv\" (UID: \"584cbd05-e2ab-4ab3-bc27-1f7206deaa1a\") " pod="tigera-operator/tigera-operator-844669ff44-2d5rv" May 27 17:04:39.244750 kubelet[3388]: I0527 17:04:39.244716 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/584cbd05-e2ab-4ab3-bc27-1f7206deaa1a-var-lib-calico\") pod \"tigera-operator-844669ff44-2d5rv\" (UID: \"584cbd05-e2ab-4ab3-bc27-1f7206deaa1a\") " pod="tigera-operator/tigera-operator-844669ff44-2d5rv" May 27 17:04:39.491047 containerd[1873]: time="2025-05-27T17:04:39.490924518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-2d5rv,Uid:584cbd05-e2ab-4ab3-bc27-1f7206deaa1a,Namespace:tigera-operator,Attempt:0,}" May 27 17:04:39.551312 containerd[1873]: time="2025-05-27T17:04:39.551240180Z" level=info msg="connecting to shim 19d85920c878d35e48983b81a82e82f9c7e9ac5a5c28c8cca0438d885a280390" address="unix:///run/containerd/s/b0f846f74cd610faeee3ff5b412a2aeadace4df9de69f041aa3384e5cc4e9b5b" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:39.569978 systemd[1]: Started cri-containerd-19d85920c878d35e48983b81a82e82f9c7e9ac5a5c28c8cca0438d885a280390.scope - libcontainer container 19d85920c878d35e48983b81a82e82f9c7e9ac5a5c28c8cca0438d885a280390. 
May 27 17:04:39.571160 containerd[1873]: time="2025-05-27T17:04:39.571123560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d5fv2,Uid:379fc8ba-aa95-4f00-99e0-2745c045910d,Namespace:kube-system,Attempt:0,}" May 27 17:04:39.611684 containerd[1873]: time="2025-05-27T17:04:39.611639206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-2d5rv,Uid:584cbd05-e2ab-4ab3-bc27-1f7206deaa1a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"19d85920c878d35e48983b81a82e82f9c7e9ac5a5c28c8cca0438d885a280390\"" May 27 17:04:39.614096 containerd[1873]: time="2025-05-27T17:04:39.613975086Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 17:04:39.640532 containerd[1873]: time="2025-05-27T17:04:39.640484952Z" level=info msg="connecting to shim d18a98c58f0efa69b272ac500faa596b11a87e9b9f2bc19908d6cefc2f80e994" address="unix:///run/containerd/s/62da2ac8449fd9548b44df8342c8b647a6c35b302693b71a1713d4666b2b16c7" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:39.664024 systemd[1]: Started cri-containerd-d18a98c58f0efa69b272ac500faa596b11a87e9b9f2bc19908d6cefc2f80e994.scope - libcontainer container d18a98c58f0efa69b272ac500faa596b11a87e9b9f2bc19908d6cefc2f80e994. 
May 27 17:04:39.688964 containerd[1873]: time="2025-05-27T17:04:39.688916833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d5fv2,Uid:379fc8ba-aa95-4f00-99e0-2745c045910d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d18a98c58f0efa69b272ac500faa596b11a87e9b9f2bc19908d6cefc2f80e994\"" May 27 17:04:39.691928 containerd[1873]: time="2025-05-27T17:04:39.691883290Z" level=info msg="CreateContainer within sandbox \"d18a98c58f0efa69b272ac500faa596b11a87e9b9f2bc19908d6cefc2f80e994\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:04:39.723953 containerd[1873]: time="2025-05-27T17:04:39.723905389Z" level=info msg="Container 864cd78ab536df7d907e0392c15049672a14d1e33dba934c00eaa5167a0b58fb: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:39.746321 containerd[1873]: time="2025-05-27T17:04:39.745900383Z" level=info msg="CreateContainer within sandbox \"d18a98c58f0efa69b272ac500faa596b11a87e9b9f2bc19908d6cefc2f80e994\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"864cd78ab536df7d907e0392c15049672a14d1e33dba934c00eaa5167a0b58fb\"" May 27 17:04:39.747754 containerd[1873]: time="2025-05-27T17:04:39.747716841Z" level=info msg="StartContainer for \"864cd78ab536df7d907e0392c15049672a14d1e33dba934c00eaa5167a0b58fb\"" May 27 17:04:39.750045 containerd[1873]: time="2025-05-27T17:04:39.749810127Z" level=info msg="connecting to shim 864cd78ab536df7d907e0392c15049672a14d1e33dba934c00eaa5167a0b58fb" address="unix:///run/containerd/s/62da2ac8449fd9548b44df8342c8b647a6c35b302693b71a1713d4666b2b16c7" protocol=ttrpc version=3 May 27 17:04:39.767031 systemd[1]: Started cri-containerd-864cd78ab536df7d907e0392c15049672a14d1e33dba934c00eaa5167a0b58fb.scope - libcontainer container 864cd78ab536df7d907e0392c15049672a14d1e33dba934c00eaa5167a0b58fb. 
May 27 17:04:39.801980 containerd[1873]: time="2025-05-27T17:04:39.801940943Z" level=info msg="StartContainer for \"864cd78ab536df7d907e0392c15049672a14d1e33dba934c00eaa5167a0b58fb\" returns successfully" May 27 17:04:40.000051 kubelet[3388]: I0527 17:04:39.999483 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d5fv2" podStartSLOduration=1.9994633670000002 podStartE2EDuration="1.999463367s" podCreationTimestamp="2025-05-27 17:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:04:39.956333214 +0000 UTC m=+8.171540223" watchObservedRunningTime="2025-05-27 17:04:39.999463367 +0000 UTC m=+8.214670376" May 27 17:04:41.479249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682170871.mount: Deactivated successfully. May 27 17:04:41.820141 containerd[1873]: time="2025-05-27T17:04:41.820004936Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:41.823175 containerd[1873]: time="2025-05-27T17:04:41.823131389Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 27 17:04:41.826643 containerd[1873]: time="2025-05-27T17:04:41.826606224Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:41.831707 containerd[1873]: time="2025-05-27T17:04:41.831664530Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:41.832234 containerd[1873]: time="2025-05-27T17:04:41.832083651Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id 
\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.218049611s" May 27 17:04:41.832234 containerd[1873]: time="2025-05-27T17:04:41.832115420Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 27 17:04:41.834931 containerd[1873]: time="2025-05-27T17:04:41.834858154Z" level=info msg="CreateContainer within sandbox \"19d85920c878d35e48983b81a82e82f9c7e9ac5a5c28c8cca0438d885a280390\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 17:04:41.863705 containerd[1873]: time="2025-05-27T17:04:41.863430360Z" level=info msg="Container f191c31610e2b9cdf2e3cb91bc80ae51337199050e0d63fd3588162c843ec34e: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:41.882233 containerd[1873]: time="2025-05-27T17:04:41.882165501Z" level=info msg="CreateContainer within sandbox \"19d85920c878d35e48983b81a82e82f9c7e9ac5a5c28c8cca0438d885a280390\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f191c31610e2b9cdf2e3cb91bc80ae51337199050e0d63fd3588162c843ec34e\"" May 27 17:04:41.883185 containerd[1873]: time="2025-05-27T17:04:41.883143556Z" level=info msg="StartContainer for \"f191c31610e2b9cdf2e3cb91bc80ae51337199050e0d63fd3588162c843ec34e\"" May 27 17:04:41.884318 containerd[1873]: time="2025-05-27T17:04:41.884267041Z" level=info msg="connecting to shim f191c31610e2b9cdf2e3cb91bc80ae51337199050e0d63fd3588162c843ec34e" address="unix:///run/containerd/s/b0f846f74cd610faeee3ff5b412a2aeadace4df9de69f041aa3384e5cc4e9b5b" protocol=ttrpc version=3 May 27 17:04:41.903027 systemd[1]: Started cri-containerd-f191c31610e2b9cdf2e3cb91bc80ae51337199050e0d63fd3588162c843ec34e.scope - libcontainer container 
f191c31610e2b9cdf2e3cb91bc80ae51337199050e0d63fd3588162c843ec34e. May 27 17:04:41.932847 containerd[1873]: time="2025-05-27T17:04:41.932797604Z" level=info msg="StartContainer for \"f191c31610e2b9cdf2e3cb91bc80ae51337199050e0d63fd3588162c843ec34e\" returns successfully" May 27 17:04:45.963551 kubelet[3388]: I0527 17:04:45.963477 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-2d5rv" podStartSLOduration=4.743654019 podStartE2EDuration="6.963460645s" podCreationTimestamp="2025-05-27 17:04:39 +0000 UTC" firstStartedPulling="2025-05-27 17:04:39.613274977 +0000 UTC m=+7.828481994" lastFinishedPulling="2025-05-27 17:04:41.833081611 +0000 UTC m=+10.048288620" observedRunningTime="2025-05-27 17:04:41.973100119 +0000 UTC m=+10.188307128" watchObservedRunningTime="2025-05-27 17:04:45.963460645 +0000 UTC m=+14.178667654" May 27 17:04:47.254710 sudo[2370]: pam_unix(sudo:session): session closed for user root May 27 17:04:47.332057 sshd[2369]: Connection closed by 10.200.16.10 port 50670 May 27 17:04:47.335046 sshd-session[2367]: pam_unix(sshd:session): session closed for user core May 27 17:04:47.339081 systemd[1]: sshd@6-10.200.20.19:22-10.200.16.10:50670.service: Deactivated successfully. May 27 17:04:47.345226 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:04:47.345411 systemd[1]: session-9.scope: Consumed 2.656s CPU time, 230.7M memory peak. May 27 17:04:47.347999 systemd-logind[1853]: Session 9 logged out. Waiting for processes to exit. May 27 17:04:47.351111 systemd-logind[1853]: Removed session 9. May 27 17:04:52.302872 systemd[1]: Created slice kubepods-besteffort-pod2496ff04_beb8_452d_889e_461d83ff1a9c.slice - libcontainer container kubepods-besteffort-pod2496ff04_beb8_452d_889e_461d83ff1a9c.slice. 
May 27 17:04:52.327665 kubelet[3388]: I0527 17:04:52.327563 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2496ff04-beb8-452d-889e-461d83ff1a9c-tigera-ca-bundle\") pod \"calico-typha-6b6bd84f6b-p92tj\" (UID: \"2496ff04-beb8-452d-889e-461d83ff1a9c\") " pod="calico-system/calico-typha-6b6bd84f6b-p92tj" May 27 17:04:52.327665 kubelet[3388]: I0527 17:04:52.327614 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2496ff04-beb8-452d-889e-461d83ff1a9c-typha-certs\") pod \"calico-typha-6b6bd84f6b-p92tj\" (UID: \"2496ff04-beb8-452d-889e-461d83ff1a9c\") " pod="calico-system/calico-typha-6b6bd84f6b-p92tj" May 27 17:04:52.327665 kubelet[3388]: I0527 17:04:52.327628 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jwm\" (UniqueName: \"kubernetes.io/projected/2496ff04-beb8-452d-889e-461d83ff1a9c-kube-api-access-x2jwm\") pod \"calico-typha-6b6bd84f6b-p92tj\" (UID: \"2496ff04-beb8-452d-889e-461d83ff1a9c\") " pod="calico-system/calico-typha-6b6bd84f6b-p92tj" May 27 17:04:52.443863 systemd[1]: Created slice kubepods-besteffort-podadc66a2d_20ad_4879_906e_4e9a2794cc86.slice - libcontainer container kubepods-besteffort-podadc66a2d_20ad_4879_906e_4e9a2794cc86.slice. 
May 27 17:04:52.529062 kubelet[3388]: I0527 17:04:52.529002 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/adc66a2d-20ad-4879-906e-4e9a2794cc86-cni-bin-dir\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529062 kubelet[3388]: I0527 17:04:52.529057 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/adc66a2d-20ad-4879-906e-4e9a2794cc86-cni-log-dir\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529701 kubelet[3388]: I0527 17:04:52.529100 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adc66a2d-20ad-4879-906e-4e9a2794cc86-lib-modules\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529701 kubelet[3388]: I0527 17:04:52.529111 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvhvw\" (UniqueName: \"kubernetes.io/projected/adc66a2d-20ad-4879-906e-4e9a2794cc86-kube-api-access-kvhvw\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529701 kubelet[3388]: I0527 17:04:52.529129 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/adc66a2d-20ad-4879-906e-4e9a2794cc86-node-certs\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529701 kubelet[3388]: I0527 17:04:52.529140 3388 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adc66a2d-20ad-4879-906e-4e9a2794cc86-xtables-lock\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529701 kubelet[3388]: I0527 17:04:52.529171 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc66a2d-20ad-4879-906e-4e9a2794cc86-tigera-ca-bundle\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529797 kubelet[3388]: I0527 17:04:52.529185 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/adc66a2d-20ad-4879-906e-4e9a2794cc86-var-run-calico\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529797 kubelet[3388]: I0527 17:04:52.529197 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/adc66a2d-20ad-4879-906e-4e9a2794cc86-cni-net-dir\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529797 kubelet[3388]: I0527 17:04:52.529206 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/adc66a2d-20ad-4879-906e-4e9a2794cc86-flexvol-driver-host\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529797 kubelet[3388]: I0527 17:04:52.529293 3388 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/adc66a2d-20ad-4879-906e-4e9a2794cc86-policysync\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.529797 kubelet[3388]: I0527 17:04:52.529322 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/adc66a2d-20ad-4879-906e-4e9a2794cc86-var-lib-calico\") pod \"calico-node-zr4x7\" (UID: \"adc66a2d-20ad-4879-906e-4e9a2794cc86\") " pod="calico-system/calico-node-zr4x7" May 27 17:04:52.601919 kubelet[3388]: E0527 17:04:52.601683 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plcbq" podUID="57d2f7fd-b442-44f4-9692-0fa48704b404" May 27 17:04:52.606256 containerd[1873]: time="2025-05-27T17:04:52.606212598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b6bd84f6b-p92tj,Uid:2496ff04-beb8-452d-889e-461d83ff1a9c,Namespace:calico-system,Attempt:0,}" May 27 17:04:52.630575 kubelet[3388]: I0527 17:04:52.630528 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/57d2f7fd-b442-44f4-9692-0fa48704b404-registration-dir\") pod \"csi-node-driver-plcbq\" (UID: \"57d2f7fd-b442-44f4-9692-0fa48704b404\") " pod="calico-system/csi-node-driver-plcbq" May 27 17:04:52.630721 kubelet[3388]: I0527 17:04:52.630588 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdms\" (UniqueName: \"kubernetes.io/projected/57d2f7fd-b442-44f4-9692-0fa48704b404-kube-api-access-gjdms\") pod 
\"csi-node-driver-plcbq\" (UID: \"57d2f7fd-b442-44f4-9692-0fa48704b404\") " pod="calico-system/csi-node-driver-plcbq" May 27 17:04:52.630721 kubelet[3388]: I0527 17:04:52.630607 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57d2f7fd-b442-44f4-9692-0fa48704b404-kubelet-dir\") pod \"csi-node-driver-plcbq\" (UID: \"57d2f7fd-b442-44f4-9692-0fa48704b404\") " pod="calico-system/csi-node-driver-plcbq" May 27 17:04:52.630721 kubelet[3388]: I0527 17:04:52.630629 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/57d2f7fd-b442-44f4-9692-0fa48704b404-varrun\") pod \"csi-node-driver-plcbq\" (UID: \"57d2f7fd-b442-44f4-9692-0fa48704b404\") " pod="calico-system/csi-node-driver-plcbq" May 27 17:04:52.630721 kubelet[3388]: I0527 17:04:52.630668 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/57d2f7fd-b442-44f4-9692-0fa48704b404-socket-dir\") pod \"csi-node-driver-plcbq\" (UID: \"57d2f7fd-b442-44f4-9692-0fa48704b404\") " pod="calico-system/csi-node-driver-plcbq" May 27 17:04:52.639262 kubelet[3388]: E0527 17:04:52.637985 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.639262 kubelet[3388]: W0527 17:04:52.638028 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.639262 kubelet[3388]: E0527 17:04:52.638057 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.646025 kubelet[3388]: E0527 17:04:52.645851 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.646025 kubelet[3388]: W0527 17:04:52.645875 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.646025 kubelet[3388]: E0527 17:04:52.645897 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.654215 kubelet[3388]: E0527 17:04:52.652499 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.654215 kubelet[3388]: W0527 17:04:52.652528 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.654215 kubelet[3388]: E0527 17:04:52.652548 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.674438 containerd[1873]: time="2025-05-27T17:04:52.674119425Z" level=info msg="connecting to shim 67f15b76cdbe019e12f12d7bd8fe39a851aad1a5e9b426af7bfcf6a78476d27d" address="unix:///run/containerd/s/f3a0a432c9a74ad883b7f4bfea966d8a0cb39023841e5dcb415a4508d2117d82" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:52.698216 systemd[1]: Started cri-containerd-67f15b76cdbe019e12f12d7bd8fe39a851aad1a5e9b426af7bfcf6a78476d27d.scope - libcontainer container 67f15b76cdbe019e12f12d7bd8fe39a851aad1a5e9b426af7bfcf6a78476d27d. 
May 27 17:04:52.731722 kubelet[3388]: E0527 17:04:52.731685 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.731722 kubelet[3388]: W0527 17:04:52.731715 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.731893 kubelet[3388]: E0527 17:04:52.731736 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.732884 kubelet[3388]: E0527 17:04:52.732854 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.733016 kubelet[3388]: W0527 17:04:52.732994 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.733049 kubelet[3388]: E0527 17:04:52.733033 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.733552 kubelet[3388]: E0527 17:04:52.733526 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.733552 kubelet[3388]: W0527 17:04:52.733545 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.733641 kubelet[3388]: E0527 17:04:52.733568 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.733737 kubelet[3388]: E0527 17:04:52.733722 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.733737 kubelet[3388]: W0527 17:04:52.733731 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.733737 kubelet[3388]: E0527 17:04:52.733743 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.733937 kubelet[3388]: E0527 17:04:52.733922 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.733937 kubelet[3388]: W0527 17:04:52.733932 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.733997 kubelet[3388]: E0527 17:04:52.733945 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.734106 kubelet[3388]: E0527 17:04:52.734088 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.734106 kubelet[3388]: W0527 17:04:52.734098 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.734157 kubelet[3388]: E0527 17:04:52.734111 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.734271 kubelet[3388]: E0527 17:04:52.734254 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.734271 kubelet[3388]: W0527 17:04:52.734264 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.734323 kubelet[3388]: E0527 17:04:52.734276 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.734392 kubelet[3388]: E0527 17:04:52.734371 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.734392 kubelet[3388]: W0527 17:04:52.734385 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.734445 kubelet[3388]: E0527 17:04:52.734396 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.734515 kubelet[3388]: E0527 17:04:52.734501 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.734515 kubelet[3388]: W0527 17:04:52.734509 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.734557 kubelet[3388]: E0527 17:04:52.734549 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.734655 kubelet[3388]: E0527 17:04:52.734635 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.734655 kubelet[3388]: W0527 17:04:52.734643 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.734702 kubelet[3388]: E0527 17:04:52.734683 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.734763 kubelet[3388]: E0527 17:04:52.734750 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.734763 kubelet[3388]: W0527 17:04:52.734758 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.735071 kubelet[3388]: E0527 17:04:52.734835 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.735071 kubelet[3388]: E0527 17:04:52.734862 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.735071 kubelet[3388]: W0527 17:04:52.734866 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.735071 kubelet[3388]: E0527 17:04:52.734875 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.735071 kubelet[3388]: E0527 17:04:52.734968 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.735071 kubelet[3388]: W0527 17:04:52.734973 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.735071 kubelet[3388]: E0527 17:04:52.734979 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.735071 kubelet[3388]: E0527 17:04:52.735068 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.735071 kubelet[3388]: W0527 17:04:52.735073 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.735245 kubelet[3388]: E0527 17:04:52.735139 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.735245 kubelet[3388]: E0527 17:04:52.735170 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.735245 kubelet[3388]: W0527 17:04:52.735173 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.735245 kubelet[3388]: E0527 17:04:52.735241 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.735317 kubelet[3388]: E0527 17:04:52.735293 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.735317 kubelet[3388]: W0527 17:04:52.735297 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.735317 kubelet[3388]: E0527 17:04:52.735307 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.735423 kubelet[3388]: E0527 17:04:52.735408 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.735423 kubelet[3388]: W0527 17:04:52.735418 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.735486 kubelet[3388]: E0527 17:04:52.735424 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.735520 kubelet[3388]: E0527 17:04:52.735509 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.735520 kubelet[3388]: W0527 17:04:52.735516 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.735559 kubelet[3388]: E0527 17:04:52.735524 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.736039 kubelet[3388]: E0527 17:04:52.736015 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.736039 kubelet[3388]: W0527 17:04:52.736034 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.736121 kubelet[3388]: E0527 17:04:52.736053 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.736274 kubelet[3388]: E0527 17:04:52.736257 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.736274 kubelet[3388]: W0527 17:04:52.736270 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.736389 kubelet[3388]: E0527 17:04:52.736371 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.737345 kubelet[3388]: E0527 17:04:52.737319 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.737454 kubelet[3388]: W0527 17:04:52.737364 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.737721 kubelet[3388]: E0527 17:04:52.737646 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.738571 kubelet[3388]: E0527 17:04:52.738547 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.738571 kubelet[3388]: W0527 17:04:52.738564 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.739376 kubelet[3388]: E0527 17:04:52.739311 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.739982 kubelet[3388]: E0527 17:04:52.739915 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.739982 kubelet[3388]: W0527 17:04:52.739931 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.739982 kubelet[3388]: E0527 17:04:52.739945 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.740104 kubelet[3388]: E0527 17:04:52.740087 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.740104 kubelet[3388]: W0527 17:04:52.740099 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.740147 kubelet[3388]: E0527 17:04:52.740106 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.741048 kubelet[3388]: E0527 17:04:52.741029 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.741392 kubelet[3388]: W0527 17:04:52.741367 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.741392 kubelet[3388]: E0527 17:04:52.741393 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:52.751013 kubelet[3388]: E0527 17:04:52.750988 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:52.751305 kubelet[3388]: W0527 17:04:52.751238 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:52.751305 kubelet[3388]: E0527 17:04:52.751269 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:52.752814 containerd[1873]: time="2025-05-27T17:04:52.752465597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zr4x7,Uid:adc66a2d-20ad-4879-906e-4e9a2794cc86,Namespace:calico-system,Attempt:0,}" May 27 17:04:52.775728 containerd[1873]: time="2025-05-27T17:04:52.775684278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b6bd84f6b-p92tj,Uid:2496ff04-beb8-452d-889e-461d83ff1a9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"67f15b76cdbe019e12f12d7bd8fe39a851aad1a5e9b426af7bfcf6a78476d27d\"" May 27 17:04:52.777994 containerd[1873]: time="2025-05-27T17:04:52.777890952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 17:04:52.811030 containerd[1873]: time="2025-05-27T17:04:52.810975541Z" level=info msg="connecting to shim ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5" address="unix:///run/containerd/s/837d880339dd91038531aed8370c37d2734e84cef62588f071864a0f27a9f9f9" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:52.831987 systemd[1]: Started cri-containerd-ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5.scope - libcontainer container ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5. 
May 27 17:04:52.868528 containerd[1873]: time="2025-05-27T17:04:52.868409173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zr4x7,Uid:adc66a2d-20ad-4879-906e-4e9a2794cc86,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5\"" May 27 17:04:54.111566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675119285.mount: Deactivated successfully. May 27 17:04:54.526621 containerd[1873]: time="2025-05-27T17:04:54.526366166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:54.531063 containerd[1873]: time="2025-05-27T17:04:54.530999745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 27 17:04:54.535290 containerd[1873]: time="2025-05-27T17:04:54.535088775Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:54.543042 containerd[1873]: time="2025-05-27T17:04:54.542360869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:54.543042 containerd[1873]: time="2025-05-27T17:04:54.542807431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 1.764884206s" May 27 17:04:54.543042 containerd[1873]: time="2025-05-27T17:04:54.542855081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns 
image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 27 17:04:54.545018 containerd[1873]: time="2025-05-27T17:04:54.544952765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 17:04:54.555959 containerd[1873]: time="2025-05-27T17:04:54.555918129Z" level=info msg="CreateContainer within sandbox \"67f15b76cdbe019e12f12d7bd8fe39a851aad1a5e9b426af7bfcf6a78476d27d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 17:04:54.580146 containerd[1873]: time="2025-05-27T17:04:54.580052880Z" level=info msg="Container 9c002285b438b074aa894d71956e54b614e582d5903a684eeefb0568a9242a35: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:54.601356 containerd[1873]: time="2025-05-27T17:04:54.601222576Z" level=info msg="CreateContainer within sandbox \"67f15b76cdbe019e12f12d7bd8fe39a851aad1a5e9b426af7bfcf6a78476d27d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9c002285b438b074aa894d71956e54b614e582d5903a684eeefb0568a9242a35\"" May 27 17:04:54.602472 containerd[1873]: time="2025-05-27T17:04:54.602437113Z" level=info msg="StartContainer for \"9c002285b438b074aa894d71956e54b614e582d5903a684eeefb0568a9242a35\"" May 27 17:04:54.603643 containerd[1873]: time="2025-05-27T17:04:54.603604880Z" level=info msg="connecting to shim 9c002285b438b074aa894d71956e54b614e582d5903a684eeefb0568a9242a35" address="unix:///run/containerd/s/f3a0a432c9a74ad883b7f4bfea966d8a0cb39023841e5dcb415a4508d2117d82" protocol=ttrpc version=3 May 27 17:04:54.620994 systemd[1]: Started cri-containerd-9c002285b438b074aa894d71956e54b614e582d5903a684eeefb0568a9242a35.scope - libcontainer container 9c002285b438b074aa894d71956e54b614e582d5903a684eeefb0568a9242a35. 
May 27 17:04:54.661108 containerd[1873]: time="2025-05-27T17:04:54.661064578Z" level=info msg="StartContainer for \"9c002285b438b074aa894d71956e54b614e582d5903a684eeefb0568a9242a35\" returns successfully" May 27 17:04:54.906148 kubelet[3388]: E0527 17:04:54.906084 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plcbq" podUID="57d2f7fd-b442-44f4-9692-0fa48704b404" May 27 17:04:55.042471 kubelet[3388]: E0527 17:04:55.042436 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:55.042471 kubelet[3388]: W0527 17:04:55.042462 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:55.042471 kubelet[3388]: E0527 17:04:55.042484 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:55.042723 kubelet[3388]: E0527 17:04:55.042618 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:55.042723 kubelet[3388]: W0527 17:04:55.042625 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:55.042723 kubelet[3388]: E0527 17:04:55.042657 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:55.866369 containerd[1873]: time="2025-05-27T17:04:55.865833986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:55.872675 containerd[1873]: time="2025-05-27T17:04:55.872609347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 27 17:04:55.880028 containerd[1873]: time="2025-05-27T17:04:55.879492386Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:55.886026 containerd[1873]: time="2025-05-27T17:04:55.885978344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:55.886698 containerd[1873]: time="2025-05-27T17:04:55.886667292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.341388177s" May 27 17:04:55.886698 containerd[1873]: time="2025-05-27T17:04:55.886698893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 27 17:04:55.889639 containerd[1873]: time="2025-05-27T17:04:55.889602554Z" level=info msg="CreateContainer within sandbox \"ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 17:04:55.918057 containerd[1873]: time="2025-05-27T17:04:55.917887273Z" level=info msg="Container 73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:55.921127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857520186.mount: Deactivated successfully. May 27 17:04:55.943612 containerd[1873]: time="2025-05-27T17:04:55.943544631Z" level=info msg="CreateContainer within sandbox \"ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64\"" May 27 17:04:55.944368 containerd[1873]: time="2025-05-27T17:04:55.944312702Z" level=info msg="StartContainer for \"73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64\"" May 27 17:04:55.946579 containerd[1873]: time="2025-05-27T17:04:55.946505270Z" level=info msg="connecting to shim 73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64" address="unix:///run/containerd/s/837d880339dd91038531aed8370c37d2734e84cef62588f071864a0f27a9f9f9" protocol=ttrpc version=3 May 27 17:04:55.964023 systemd[1]: Started cri-containerd-73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64.scope - libcontainer container 73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64. May 27 17:04:55.992453 kubelet[3388]: I0527 17:04:55.992088 3388 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:04:56.017099 containerd[1873]: time="2025-05-27T17:04:56.017057298Z" level=info msg="StartContainer for \"73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64\" returns successfully" May 27 17:04:56.025699 systemd[1]: cri-containerd-73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64.scope: Deactivated successfully. 
May 27 17:04:56.030171 containerd[1873]: time="2025-05-27T17:04:56.030132018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64\" id:\"73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64\" pid:4007 exited_at:{seconds:1748365496 nanos:29606613}" May 27 17:04:56.030310 containerd[1873]: time="2025-05-27T17:04:56.030277392Z" level=info msg="received exit event container_id:\"73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64\" id:\"73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64\" pid:4007 exited_at:{seconds:1748365496 nanos:29606613}" May 27 17:04:56.047708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73afc8546e44cf589f239b53c86b96fcf9949b16fb214b59a58c701bb7c15d64-rootfs.mount: Deactivated successfully. May 27 17:04:56.906710 kubelet[3388]: E0527 17:04:56.906654 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plcbq" podUID="57d2f7fd-b442-44f4-9692-0fa48704b404" May 27 17:04:57.010961 kubelet[3388]: I0527 17:04:57.010259 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b6bd84f6b-p92tj" podStartSLOduration=3.244128404 podStartE2EDuration="5.010242451s" podCreationTimestamp="2025-05-27 17:04:52 +0000 UTC" firstStartedPulling="2025-05-27 17:04:52.777659247 +0000 UTC m=+20.992866256" lastFinishedPulling="2025-05-27 17:04:54.543773286 +0000 UTC m=+22.758980303" observedRunningTime="2025-05-27 17:04:54.999035151 +0000 UTC m=+23.214242160" watchObservedRunningTime="2025-05-27 17:04:57.010242451 +0000 UTC m=+25.225449460" May 27 17:04:58.000934 containerd[1873]: time="2025-05-27T17:04:58.000892103Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 17:04:58.906013 kubelet[3388]: E0527 17:04:58.905923 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plcbq" podUID="57d2f7fd-b442-44f4-9692-0fa48704b404" May 27 17:04:59.117182 kubelet[3388]: I0527 17:04:59.117141 3388 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:05:00.200217 containerd[1873]: time="2025-05-27T17:05:00.200159584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:00.203732 containerd[1873]: time="2025-05-27T17:05:00.203679905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 27 17:05:00.206508 containerd[1873]: time="2025-05-27T17:05:00.206401443Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:00.210173 containerd[1873]: time="2025-05-27T17:05:00.210094754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:00.210632 containerd[1873]: time="2025-05-27T17:05:00.210443784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 2.209419812s" May 27 17:05:00.210632 containerd[1873]: 
time="2025-05-27T17:05:00.210473481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 27 17:05:00.213500 containerd[1873]: time="2025-05-27T17:05:00.213460061Z" level=info msg="CreateContainer within sandbox \"ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 17:05:00.242303 containerd[1873]: time="2025-05-27T17:05:00.240448327Z" level=info msg="Container d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:00.242025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057345216.mount: Deactivated successfully. May 27 17:05:00.259178 containerd[1873]: time="2025-05-27T17:05:00.259131774Z" level=info msg="CreateContainer within sandbox \"ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0\"" May 27 17:05:00.259988 containerd[1873]: time="2025-05-27T17:05:00.259962110Z" level=info msg="StartContainer for \"d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0\"" May 27 17:05:00.261365 containerd[1873]: time="2025-05-27T17:05:00.261318051Z" level=info msg="connecting to shim d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0" address="unix:///run/containerd/s/837d880339dd91038531aed8370c37d2734e84cef62588f071864a0f27a9f9f9" protocol=ttrpc version=3 May 27 17:05:00.281995 systemd[1]: Started cri-containerd-d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0.scope - libcontainer container d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0. 
May 27 17:05:00.321767 containerd[1873]: time="2025-05-27T17:05:00.321660062Z" level=info msg="StartContainer for \"d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0\" returns successfully" May 27 17:05:00.906374 kubelet[3388]: E0527 17:05:00.906036 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plcbq" podUID="57d2f7fd-b442-44f4-9692-0fa48704b404" May 27 17:05:01.490221 systemd[1]: cri-containerd-d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0.scope: Deactivated successfully. May 27 17:05:01.490475 systemd[1]: cri-containerd-d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0.scope: Consumed 337ms CPU time, 185.2M memory peak, 165.5M written to disk. May 27 17:05:01.494357 containerd[1873]: time="2025-05-27T17:05:01.494311164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0\" id:\"d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0\" pid:4068 exited_at:{seconds:1748365501 nanos:493900756}" May 27 17:05:01.494817 containerd[1873]: time="2025-05-27T17:05:01.494378767Z" level=info msg="received exit event container_id:\"d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0\" id:\"d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0\" pid:4068 exited_at:{seconds:1748365501 nanos:493900756}" May 27 17:05:01.515966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1410149af13c48408a34b2fa0f4a069c8c9e3afaa0cdb3391061f07cae157b0-rootfs.mount: Deactivated successfully. 
May 27 17:05:01.574696 kubelet[3388]: I0527 17:05:01.574654 3388 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:05:01.905907 kubelet[3388]: I0527 17:05:01.698691 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf48s\" (UniqueName: \"kubernetes.io/projected/df7177f7-e905-4cb8-a878-b8f83662a825-kube-api-access-nf48s\") pod \"calico-apiserver-7575dbc8d6-2h98x\" (UID: \"df7177f7-e905-4cb8-a878-b8f83662a825\") " pod="calico-apiserver/calico-apiserver-7575dbc8d6-2h98x" May 27 17:05:01.905907 kubelet[3388]: I0527 17:05:01.698732 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4md7\" (UniqueName: \"kubernetes.io/projected/5f5890bf-8759-4c7b-99a7-03be7e49f603-kube-api-access-d4md7\") pod \"calico-kube-controllers-649f86f58-8glzb\" (UID: \"5f5890bf-8759-4c7b-99a7-03be7e49f603\") " pod="calico-system/calico-kube-controllers-649f86f58-8glzb" May 27 17:05:01.905907 kubelet[3388]: I0527 17:05:01.698749 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/df7177f7-e905-4cb8-a878-b8f83662a825-calico-apiserver-certs\") pod \"calico-apiserver-7575dbc8d6-2h98x\" (UID: \"df7177f7-e905-4cb8-a878-b8f83662a825\") " pod="calico-apiserver/calico-apiserver-7575dbc8d6-2h98x" May 27 17:05:01.905907 kubelet[3388]: I0527 17:05:01.698766 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d900715-e430-414e-99e0-47f267553a55-whisker-ca-bundle\") pod \"whisker-76d5d5c954-8hqnp\" (UID: \"1d900715-e430-414e-99e0-47f267553a55\") " pod="calico-system/whisker-76d5d5c954-8hqnp" May 27 17:05:01.905907 kubelet[3388]: I0527 17:05:01.698802 3388 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6qq8\" (UniqueName: \"kubernetes.io/projected/1d900715-e430-414e-99e0-47f267553a55-kube-api-access-m6qq8\") pod \"whisker-76d5d5c954-8hqnp\" (UID: \"1d900715-e430-414e-99e0-47f267553a55\") " pod="calico-system/whisker-76d5d5c954-8hqnp" May 27 17:05:01.623570 systemd[1]: Created slice kubepods-burstable-pod9cc266f6_0fc6_480f_b972_0a23fa0f56cc.slice - libcontainer container kubepods-burstable-pod9cc266f6_0fc6_480f_b972_0a23fa0f56cc.slice. May 27 17:05:01.906151 kubelet[3388]: I0527 17:05:01.698856 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmpss\" (UniqueName: \"kubernetes.io/projected/b3c16600-5e75-4169-8708-58525af73bd4-kube-api-access-zmpss\") pod \"coredns-668d6bf9bc-jj5zj\" (UID: \"b3c16600-5e75-4169-8708-58525af73bd4\") " pod="kube-system/coredns-668d6bf9bc-jj5zj" May 27 17:05:01.906151 kubelet[3388]: I0527 17:05:01.698886 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e-calico-apiserver-certs\") pod \"calico-apiserver-7575dbc8d6-bc5nn\" (UID: \"9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e\") " pod="calico-apiserver/calico-apiserver-7575dbc8d6-bc5nn" May 27 17:05:01.906151 kubelet[3388]: I0527 17:05:01.698901 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2j9\" (UniqueName: \"kubernetes.io/projected/9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e-kube-api-access-vm2j9\") pod \"calico-apiserver-7575dbc8d6-bc5nn\" (UID: \"9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e\") " pod="calico-apiserver/calico-apiserver-7575dbc8d6-bc5nn" May 27 17:05:01.906151 kubelet[3388]: I0527 17:05:01.698957 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/3a0c508c-f41a-47cd-aff0-78e2be619952-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-sxz5s\" (UID: \"3a0c508c-f41a-47cd-aff0-78e2be619952\") " pod="calico-system/goldmane-78d55f7ddc-sxz5s" May 27 17:05:01.906151 kubelet[3388]: I0527 17:05:01.698970 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cc266f6-0fc6-480f-b972-0a23fa0f56cc-config-volume\") pod \"coredns-668d6bf9bc-w84gq\" (UID: \"9cc266f6-0fc6-480f-b972-0a23fa0f56cc\") " pod="kube-system/coredns-668d6bf9bc-w84gq" May 27 17:05:01.639239 systemd[1]: Created slice kubepods-besteffort-pod5f5890bf_8759_4c7b_99a7_03be7e49f603.slice - libcontainer container kubepods-besteffort-pod5f5890bf_8759_4c7b_99a7_03be7e49f603.slice. May 27 17:05:01.911879 kubelet[3388]: I0527 17:05:01.698984 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pztq\" (UniqueName: \"kubernetes.io/projected/9cc266f6-0fc6-480f-b972-0a23fa0f56cc-kube-api-access-2pztq\") pod \"coredns-668d6bf9bc-w84gq\" (UID: \"9cc266f6-0fc6-480f-b972-0a23fa0f56cc\") " pod="kube-system/coredns-668d6bf9bc-w84gq" May 27 17:05:01.911879 kubelet[3388]: I0527 17:05:01.698998 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f5890bf-8759-4c7b-99a7-03be7e49f603-tigera-ca-bundle\") pod \"calico-kube-controllers-649f86f58-8glzb\" (UID: \"5f5890bf-8759-4c7b-99a7-03be7e49f603\") " pod="calico-system/calico-kube-controllers-649f86f58-8glzb" May 27 17:05:01.911879 kubelet[3388]: I0527 17:05:01.699011 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0c508c-f41a-47cd-aff0-78e2be619952-config\") pod \"goldmane-78d55f7ddc-sxz5s\" (UID: 
\"3a0c508c-f41a-47cd-aff0-78e2be619952\") " pod="calico-system/goldmane-78d55f7ddc-sxz5s" May 27 17:05:01.911879 kubelet[3388]: I0527 17:05:01.699024 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3c16600-5e75-4169-8708-58525af73bd4-config-volume\") pod \"coredns-668d6bf9bc-jj5zj\" (UID: \"b3c16600-5e75-4169-8708-58525af73bd4\") " pod="kube-system/coredns-668d6bf9bc-jj5zj" May 27 17:05:01.911879 kubelet[3388]: I0527 17:05:01.699035 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3a0c508c-f41a-47cd-aff0-78e2be619952-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-sxz5s\" (UID: \"3a0c508c-f41a-47cd-aff0-78e2be619952\") " pod="calico-system/goldmane-78d55f7ddc-sxz5s" May 27 17:05:01.650931 systemd[1]: Created slice kubepods-burstable-podb3c16600_5e75_4169_8708_58525af73bd4.slice - libcontainer container kubepods-burstable-podb3c16600_5e75_4169_8708_58525af73bd4.slice. 
May 27 17:05:01.912112 kubelet[3388]: I0527 17:05:01.699049 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hdf\" (UniqueName: \"kubernetes.io/projected/3a0c508c-f41a-47cd-aff0-78e2be619952-kube-api-access-w2hdf\") pod \"goldmane-78d55f7ddc-sxz5s\" (UID: \"3a0c508c-f41a-47cd-aff0-78e2be619952\") " pod="calico-system/goldmane-78d55f7ddc-sxz5s" May 27 17:05:01.912112 kubelet[3388]: I0527 17:05:01.699071 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d900715-e430-414e-99e0-47f267553a55-whisker-backend-key-pair\") pod \"whisker-76d5d5c954-8hqnp\" (UID: \"1d900715-e430-414e-99e0-47f267553a55\") " pod="calico-system/whisker-76d5d5c954-8hqnp" May 27 17:05:01.658916 systemd[1]: Created slice kubepods-besteffort-pod9cfe86f6_d971_4b5c_8f6a_1a23a4372e2e.slice - libcontainer container kubepods-besteffort-pod9cfe86f6_d971_4b5c_8f6a_1a23a4372e2e.slice. May 27 17:05:01.663918 systemd[1]: Created slice kubepods-besteffort-pod3a0c508c_f41a_47cd_aff0_78e2be619952.slice - libcontainer container kubepods-besteffort-pod3a0c508c_f41a_47cd_aff0_78e2be619952.slice. May 27 17:05:01.672951 systemd[1]: Created slice kubepods-besteffort-poddf7177f7_e905_4cb8_a878_b8f83662a825.slice - libcontainer container kubepods-besteffort-poddf7177f7_e905_4cb8_a878_b8f83662a825.slice. May 27 17:05:01.679757 systemd[1]: Created slice kubepods-besteffort-pod1d900715_e430_414e_99e0_47f267553a55.slice - libcontainer container kubepods-besteffort-pod1d900715_e430_414e_99e0_47f267553a55.slice. 
May 27 17:05:01.942703 containerd[1873]: time="2025-05-27T17:05:01.942542562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d5d5c954-8hqnp,Uid:1d900715-e430-414e-99e0-47f267553a55,Namespace:calico-system,Attempt:0,}" May 27 17:05:01.998398 containerd[1873]: time="2025-05-27T17:05:01.998351654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jj5zj,Uid:b3c16600-5e75-4169-8708-58525af73bd4,Namespace:kube-system,Attempt:0,}" May 27 17:05:02.215888 containerd[1873]: time="2025-05-27T17:05:02.215749464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7575dbc8d6-2h98x,Uid:df7177f7-e905-4cb8-a878-b8f83662a825,Namespace:calico-apiserver,Attempt:0,}" May 27 17:05:02.217512 containerd[1873]: time="2025-05-27T17:05:02.217439993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-sxz5s,Uid:3a0c508c-f41a-47cd-aff0-78e2be619952,Namespace:calico-system,Attempt:0,}" May 27 17:05:02.219012 containerd[1873]: time="2025-05-27T17:05:02.218963045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7575dbc8d6-bc5nn,Uid:9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e,Namespace:calico-apiserver,Attempt:0,}" May 27 17:05:02.226931 containerd[1873]: time="2025-05-27T17:05:02.226853000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w84gq,Uid:9cc266f6-0fc6-480f-b972-0a23fa0f56cc,Namespace:kube-system,Attempt:0,}" May 27 17:05:02.229575 containerd[1873]: time="2025-05-27T17:05:02.229539072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649f86f58-8glzb,Uid:5f5890bf-8759-4c7b-99a7-03be7e49f603,Namespace:calico-system,Attempt:0,}" May 27 17:05:02.489599 containerd[1873]: time="2025-05-27T17:05:02.489278889Z" level=error msg="Failed to destroy network for sandbox \"07b760e9950fb43e5bf1c7b8227c8ec5476350139d4d29733d35e1dee1319cbd\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.506364 containerd[1873]: time="2025-05-27T17:05:02.506307152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d5d5c954-8hqnp,Uid:1d900715-e430-414e-99e0-47f267553a55,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07b760e9950fb43e5bf1c7b8227c8ec5476350139d4d29733d35e1dee1319cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.507196 kubelet[3388]: E0527 17:05:02.507050 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07b760e9950fb43e5bf1c7b8227c8ec5476350139d4d29733d35e1dee1319cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.507196 kubelet[3388]: E0527 17:05:02.507142 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07b760e9950fb43e5bf1c7b8227c8ec5476350139d4d29733d35e1dee1319cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76d5d5c954-8hqnp" May 27 17:05:02.507196 kubelet[3388]: E0527 17:05:02.507161 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07b760e9950fb43e5bf1c7b8227c8ec5476350139d4d29733d35e1dee1319cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76d5d5c954-8hqnp" May 27 17:05:02.507336 kubelet[3388]: E0527 17:05:02.507309 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76d5d5c954-8hqnp_calico-system(1d900715-e430-414e-99e0-47f267553a55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-76d5d5c954-8hqnp_calico-system(1d900715-e430-414e-99e0-47f267553a55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07b760e9950fb43e5bf1c7b8227c8ec5476350139d4d29733d35e1dee1319cbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76d5d5c954-8hqnp" podUID="1d900715-e430-414e-99e0-47f267553a55" May 27 17:05:02.566127 containerd[1873]: time="2025-05-27T17:05:02.566020075Z" level=error msg="Failed to destroy network for sandbox \"091cac38f2ff4110613104451899ffc507fb20118780205257b78eee451c3b30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.567765 systemd[1]: run-netns-cni\x2deee43774\x2d3214\x2d3255\x2d5e0d\x2d04bf6a489865.mount: Deactivated successfully. 
May 27 17:05:02.574674 containerd[1873]: time="2025-05-27T17:05:02.574604905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jj5zj,Uid:b3c16600-5e75-4169-8708-58525af73bd4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"091cac38f2ff4110613104451899ffc507fb20118780205257b78eee451c3b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.575176 kubelet[3388]: E0527 17:05:02.575128 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091cac38f2ff4110613104451899ffc507fb20118780205257b78eee451c3b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.575353 kubelet[3388]: E0527 17:05:02.575289 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091cac38f2ff4110613104451899ffc507fb20118780205257b78eee451c3b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jj5zj" May 27 17:05:02.575353 kubelet[3388]: E0527 17:05:02.575322 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091cac38f2ff4110613104451899ffc507fb20118780205257b78eee451c3b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-jj5zj" May 27 17:05:02.575483 kubelet[3388]: E0527 17:05:02.575452 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jj5zj_kube-system(b3c16600-5e75-4169-8708-58525af73bd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jj5zj_kube-system(b3c16600-5e75-4169-8708-58525af73bd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"091cac38f2ff4110613104451899ffc507fb20118780205257b78eee451c3b30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jj5zj" podUID="b3c16600-5e75-4169-8708-58525af73bd4" May 27 17:05:02.616639 containerd[1873]: time="2025-05-27T17:05:02.616593290Z" level=error msg="Failed to destroy network for sandbox \"d9d3d373b5bd3277fde1a619b705c2097e3211657bc09d920b6fc56404097e82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.618253 containerd[1873]: time="2025-05-27T17:05:02.618215841Z" level=error msg="Failed to destroy network for sandbox \"37f77f3c89064990bfad54ddf7f38587aeea2b27c4b786b69c24c77bde1cedff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.619907 systemd[1]: run-netns-cni\x2daec5ad5e\x2d3d2b\x2d2a98\x2d268a\x2d0d0065ad7577.mount: Deactivated successfully. May 27 17:05:02.619997 systemd[1]: run-netns-cni\x2d893c6933\x2d79ea\x2ddb3a\x2d44fa\x2daa0ea95ef41d.mount: Deactivated successfully. 
May 27 17:05:02.620596 containerd[1873]: time="2025-05-27T17:05:02.620493762Z" level=error msg="Failed to destroy network for sandbox \"c91644951eff4d59481c70fe4904ab689190bad77fba27c418786dce9d8be85f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.622736 systemd[1]: run-netns-cni\x2d55cb6a12\x2de47d\x2d551f\x2df1c0\x2d4daaad4d13fb.mount: Deactivated successfully. May 27 17:05:02.623043 containerd[1873]: time="2025-05-27T17:05:02.622231438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-sxz5s,Uid:3a0c508c-f41a-47cd-aff0-78e2be619952,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9d3d373b5bd3277fde1a619b705c2097e3211657bc09d920b6fc56404097e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.623766 containerd[1873]: time="2025-05-27T17:05:02.623417956Z" level=error msg="Failed to destroy network for sandbox \"ccbe7c7bb85266a9eac0d866daa4f9db92a071397375f35207bf0c1d1da5589e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.624442 kubelet[3388]: E0527 17:05:02.624393 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9d3d373b5bd3277fde1a619b705c2097e3211657bc09d920b6fc56404097e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.624522 kubelet[3388]: E0527 17:05:02.624466 
3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9d3d373b5bd3277fde1a619b705c2097e3211657bc09d920b6fc56404097e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-sxz5s" May 27 17:05:02.624522 kubelet[3388]: E0527 17:05:02.624483 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9d3d373b5bd3277fde1a619b705c2097e3211657bc09d920b6fc56404097e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-sxz5s" May 27 17:05:02.624559 kubelet[3388]: E0527 17:05:02.624529 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-sxz5s_calico-system(3a0c508c-f41a-47cd-aff0-78e2be619952)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-sxz5s_calico-system(3a0c508c-f41a-47cd-aff0-78e2be619952)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9d3d373b5bd3277fde1a619b705c2097e3211657bc09d920b6fc56404097e82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:05:02.625774 containerd[1873]: time="2025-05-27T17:05:02.625675444Z" level=error msg="Failed to destroy network for sandbox \"2e0c5b2de62cc1545ab3b001ea61927c98e63ac7c5e28321aaef0034d8796b29\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.626478 systemd[1]: run-netns-cni\x2d25a7ffe3\x2dca99\x2db428\x2de03e\x2d8d8997a22444.mount: Deactivated successfully. May 27 17:05:02.636347 containerd[1873]: time="2025-05-27T17:05:02.636277672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7575dbc8d6-2h98x,Uid:df7177f7-e905-4cb8-a878-b8f83662a825,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f77f3c89064990bfad54ddf7f38587aeea2b27c4b786b69c24c77bde1cedff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.636663 kubelet[3388]: E0527 17:05:02.636626 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f77f3c89064990bfad54ddf7f38587aeea2b27c4b786b69c24c77bde1cedff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.636771 kubelet[3388]: E0527 17:05:02.636757 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f77f3c89064990bfad54ddf7f38587aeea2b27c4b786b69c24c77bde1cedff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7575dbc8d6-2h98x" May 27 17:05:02.637214 kubelet[3388]: E0527 17:05:02.636882 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"37f77f3c89064990bfad54ddf7f38587aeea2b27c4b786b69c24c77bde1cedff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7575dbc8d6-2h98x" May 27 17:05:02.637214 kubelet[3388]: E0527 17:05:02.636947 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7575dbc8d6-2h98x_calico-apiserver(df7177f7-e905-4cb8-a878-b8f83662a825)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7575dbc8d6-2h98x_calico-apiserver(df7177f7-e905-4cb8-a878-b8f83662a825)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37f77f3c89064990bfad54ddf7f38587aeea2b27c4b786b69c24c77bde1cedff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7575dbc8d6-2h98x" podUID="df7177f7-e905-4cb8-a878-b8f83662a825" May 27 17:05:02.644904 containerd[1873]: time="2025-05-27T17:05:02.644854790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649f86f58-8glzb,Uid:5f5890bf-8759-4c7b-99a7-03be7e49f603,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91644951eff4d59481c70fe4904ab689190bad77fba27c418786dce9d8be85f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.645480 kubelet[3388]: E0527 17:05:02.645333 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c91644951eff4d59481c70fe4904ab689190bad77fba27c418786dce9d8be85f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.645480 kubelet[3388]: E0527 17:05:02.645387 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91644951eff4d59481c70fe4904ab689190bad77fba27c418786dce9d8be85f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649f86f58-8glzb" May 27 17:05:02.645480 kubelet[3388]: E0527 17:05:02.645403 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91644951eff4d59481c70fe4904ab689190bad77fba27c418786dce9d8be85f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649f86f58-8glzb" May 27 17:05:02.645753 kubelet[3388]: E0527 17:05:02.645440 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-649f86f58-8glzb_calico-system(5f5890bf-8759-4c7b-99a7-03be7e49f603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-649f86f58-8glzb_calico-system(5f5890bf-8759-4c7b-99a7-03be7e49f603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c91644951eff4d59481c70fe4904ab689190bad77fba27c418786dce9d8be85f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-649f86f58-8glzb" podUID="5f5890bf-8759-4c7b-99a7-03be7e49f603" May 27 17:05:02.653782 containerd[1873]: time="2025-05-27T17:05:02.653701198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7575dbc8d6-bc5nn,Uid:9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbe7c7bb85266a9eac0d866daa4f9db92a071397375f35207bf0c1d1da5589e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.654122 kubelet[3388]: E0527 17:05:02.654090 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbe7c7bb85266a9eac0d866daa4f9db92a071397375f35207bf0c1d1da5589e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.654259 kubelet[3388]: E0527 17:05:02.654243 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbe7c7bb85266a9eac0d866daa4f9db92a071397375f35207bf0c1d1da5589e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7575dbc8d6-bc5nn" May 27 17:05:02.654394 kubelet[3388]: E0527 17:05:02.654308 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbe7c7bb85266a9eac0d866daa4f9db92a071397375f35207bf0c1d1da5589e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7575dbc8d6-bc5nn" May 27 17:05:02.654464 kubelet[3388]: E0527 17:05:02.654354 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7575dbc8d6-bc5nn_calico-apiserver(9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7575dbc8d6-bc5nn_calico-apiserver(9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccbe7c7bb85266a9eac0d866daa4f9db92a071397375f35207bf0c1d1da5589e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7575dbc8d6-bc5nn" podUID="9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e" May 27 17:05:02.657549 containerd[1873]: time="2025-05-27T17:05:02.657387454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w84gq,Uid:9cc266f6-0fc6-480f-b972-0a23fa0f56cc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e0c5b2de62cc1545ab3b001ea61927c98e63ac7c5e28321aaef0034d8796b29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.657847 kubelet[3388]: E0527 17:05:02.657788 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e0c5b2de62cc1545ab3b001ea61927c98e63ac7c5e28321aaef0034d8796b29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 27 17:05:02.657945 kubelet[3388]: E0527 17:05:02.657854 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e0c5b2de62cc1545ab3b001ea61927c98e63ac7c5e28321aaef0034d8796b29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w84gq" May 27 17:05:02.657945 kubelet[3388]: E0527 17:05:02.657876 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e0c5b2de62cc1545ab3b001ea61927c98e63ac7c5e28321aaef0034d8796b29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w84gq" May 27 17:05:02.657945 kubelet[3388]: E0527 17:05:02.657911 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-w84gq_kube-system(9cc266f6-0fc6-480f-b972-0a23fa0f56cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w84gq_kube-system(9cc266f6-0fc6-480f-b972-0a23fa0f56cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e0c5b2de62cc1545ab3b001ea61927c98e63ac7c5e28321aaef0034d8796b29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w84gq" podUID="9cc266f6-0fc6-480f-b972-0a23fa0f56cc" May 27 17:05:02.912126 systemd[1]: Created slice kubepods-besteffort-pod57d2f7fd_b442_44f4_9692_0fa48704b404.slice - libcontainer container 
kubepods-besteffort-pod57d2f7fd_b442_44f4_9692_0fa48704b404.slice. May 27 17:05:02.914849 containerd[1873]: time="2025-05-27T17:05:02.914737929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plcbq,Uid:57d2f7fd-b442-44f4-9692-0fa48704b404,Namespace:calico-system,Attempt:0,}" May 27 17:05:02.960794 containerd[1873]: time="2025-05-27T17:05:02.960700301Z" level=error msg="Failed to destroy network for sandbox \"2a76f7c63297f83ee8bb7550a3306221b8940aa93e7ceb439c32cd186d7a62d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.965217 containerd[1873]: time="2025-05-27T17:05:02.965126834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plcbq,Uid:57d2f7fd-b442-44f4-9692-0fa48704b404,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a76f7c63297f83ee8bb7550a3306221b8940aa93e7ceb439c32cd186d7a62d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.965937 kubelet[3388]: E0527 17:05:02.965669 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a76f7c63297f83ee8bb7550a3306221b8940aa93e7ceb439c32cd186d7a62d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:02.965937 kubelet[3388]: E0527 17:05:02.965726 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a76f7c63297f83ee8bb7550a3306221b8940aa93e7ceb439c32cd186d7a62d1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plcbq" May 27 17:05:02.965937 kubelet[3388]: E0527 17:05:02.965741 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a76f7c63297f83ee8bb7550a3306221b8940aa93e7ceb439c32cd186d7a62d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plcbq" May 27 17:05:02.966496 kubelet[3388]: E0527 17:05:02.965781 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-plcbq_calico-system(57d2f7fd-b442-44f4-9692-0fa48704b404)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-plcbq_calico-system(57d2f7fd-b442-44f4-9692-0fa48704b404)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a76f7c63297f83ee8bb7550a3306221b8940aa93e7ceb439c32cd186d7a62d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-plcbq" podUID="57d2f7fd-b442-44f4-9692-0fa48704b404" May 27 17:05:03.017013 containerd[1873]: time="2025-05-27T17:05:03.016956794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:05:03.515039 systemd[1]: run-netns-cni\x2dcdb0edc4\x2dfec1\x2d5d2d\x2d6844\x2d21d7dd5a7d2d.mount: Deactivated successfully. May 27 17:05:08.974292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167329557.mount: Deactivated successfully. 
May 27 17:05:09.235772 containerd[1873]: time="2025-05-27T17:05:09.235484177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:09.237823 containerd[1873]: time="2025-05-27T17:05:09.237776008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 27 17:05:09.243354 containerd[1873]: time="2025-05-27T17:05:09.243279648Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:09.248253 containerd[1873]: time="2025-05-27T17:05:09.248185706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:09.248794 containerd[1873]: time="2025-05-27T17:05:09.248530719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 6.231528355s" May 27 17:05:09.248794 containerd[1873]: time="2025-05-27T17:05:09.248562728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 27 17:05:09.261010 containerd[1873]: time="2025-05-27T17:05:09.260968341Z" level=info msg="CreateContainer within sandbox \"ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 17:05:09.298712 containerd[1873]: time="2025-05-27T17:05:09.298619734Z" level=info msg="Container 
33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:09.301075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165726901.mount: Deactivated successfully. May 27 17:05:09.318280 containerd[1873]: time="2025-05-27T17:05:09.318234868Z" level=info msg="CreateContainer within sandbox \"ef6244c6fc649cf9d4777308c8a66a4a2be88c469958cdf977ee0f4d4caf9ba5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\"" May 27 17:05:09.318839 containerd[1873]: time="2025-05-27T17:05:09.318800305Z" level=info msg="StartContainer for \"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\"" May 27 17:05:09.320978 containerd[1873]: time="2025-05-27T17:05:09.320915129Z" level=info msg="connecting to shim 33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b" address="unix:///run/containerd/s/837d880339dd91038531aed8370c37d2734e84cef62588f071864a0f27a9f9f9" protocol=ttrpc version=3 May 27 17:05:09.343027 systemd[1]: Started cri-containerd-33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b.scope - libcontainer container 33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b. May 27 17:05:09.378883 containerd[1873]: time="2025-05-27T17:05:09.378696739Z" level=info msg="StartContainer for \"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" returns successfully" May 27 17:05:09.917307 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 17:05:09.917439 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 17:05:10.152129 containerd[1873]: time="2025-05-27T17:05:10.152075919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" id:\"ec6a347b12eeb757497319ad4e62f020c5feb65e0d0dd1c23a5b4c642b38b887\" pid:4409 exit_status:1 exited_at:{seconds:1748365510 nanos:151276633}" May 27 17:05:10.153608 kubelet[3388]: I0527 17:05:10.153515 3388 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d900715-e430-414e-99e0-47f267553a55-whisker-ca-bundle\") pod \"1d900715-e430-414e-99e0-47f267553a55\" (UID: \"1d900715-e430-414e-99e0-47f267553a55\") " May 27 17:05:10.154700 kubelet[3388]: I0527 17:05:10.153701 3388 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6qq8\" (UniqueName: \"kubernetes.io/projected/1d900715-e430-414e-99e0-47f267553a55-kube-api-access-m6qq8\") pod \"1d900715-e430-414e-99e0-47f267553a55\" (UID: \"1d900715-e430-414e-99e0-47f267553a55\") " May 27 17:05:10.154700 kubelet[3388]: I0527 17:05:10.153909 3388 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d900715-e430-414e-99e0-47f267553a55-whisker-backend-key-pair\") pod \"1d900715-e430-414e-99e0-47f267553a55\" (UID: \"1d900715-e430-414e-99e0-47f267553a55\") " May 27 17:05:10.160005 kubelet[3388]: I0527 17:05:10.159954 3388 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d900715-e430-414e-99e0-47f267553a55-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1d900715-e430-414e-99e0-47f267553a55" (UID: "1d900715-e430-414e-99e0-47f267553a55"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:05:10.161837 systemd[1]: var-lib-kubelet-pods-1d900715\x2de430\x2d414e\x2d99e0\x2d47f267553a55-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 17:05:10.166248 kubelet[3388]: I0527 17:05:10.166120 3388 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d900715-e430-414e-99e0-47f267553a55-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1d900715-e430-414e-99e0-47f267553a55" (UID: "1d900715-e430-414e-99e0-47f267553a55"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:05:10.169146 kubelet[3388]: I0527 17:05:10.169027 3388 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d900715-e430-414e-99e0-47f267553a55-kube-api-access-m6qq8" (OuterVolumeSpecName: "kube-api-access-m6qq8") pod "1d900715-e430-414e-99e0-47f267553a55" (UID: "1d900715-e430-414e-99e0-47f267553a55"). InnerVolumeSpecName "kube-api-access-m6qq8". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:05:10.170021 systemd[1]: var-lib-kubelet-pods-1d900715\x2de430\x2d414e\x2d99e0\x2d47f267553a55-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6qq8.mount: Deactivated successfully. 
May 27 17:05:10.255111 kubelet[3388]: I0527 17:05:10.255069 3388 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d900715-e430-414e-99e0-47f267553a55-whisker-backend-key-pair\") on node \"ci-4344.0.0-a-f939a1e004\" DevicePath \"\"" May 27 17:05:10.255111 kubelet[3388]: I0527 17:05:10.255105 3388 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d900715-e430-414e-99e0-47f267553a55-whisker-ca-bundle\") on node \"ci-4344.0.0-a-f939a1e004\" DevicePath \"\"" May 27 17:05:10.255111 kubelet[3388]: I0527 17:05:10.255116 3388 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6qq8\" (UniqueName: \"kubernetes.io/projected/1d900715-e430-414e-99e0-47f267553a55-kube-api-access-m6qq8\") on node \"ci-4344.0.0-a-f939a1e004\" DevicePath \"\"" May 27 17:05:11.040222 systemd[1]: Removed slice kubepods-besteffort-pod1d900715_e430_414e_99e0_47f267553a55.slice - libcontainer container kubepods-besteffort-pod1d900715_e430_414e_99e0_47f267553a55.slice. May 27 17:05:11.060886 kubelet[3388]: I0527 17:05:11.060568 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zr4x7" podStartSLOduration=2.679395053 podStartE2EDuration="19.056786419s" podCreationTimestamp="2025-05-27 17:04:52 +0000 UTC" firstStartedPulling="2025-05-27 17:04:52.871798746 +0000 UTC m=+21.087005755" lastFinishedPulling="2025-05-27 17:05:09.249190104 +0000 UTC m=+37.464397121" observedRunningTime="2025-05-27 17:05:10.075667188 +0000 UTC m=+38.290874197" watchObservedRunningTime="2025-05-27 17:05:11.056786419 +0000 UTC m=+39.271993428" May 27 17:05:11.115705 systemd[1]: Created slice kubepods-besteffort-podeed5dbb2_df5f_407e_aeca_9cb6ce8022ee.slice - libcontainer container kubepods-besteffort-podeed5dbb2_df5f_407e_aeca_9cb6ce8022ee.slice. 
May 27 17:05:11.153226 containerd[1873]: time="2025-05-27T17:05:11.153181146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" id:\"cbf280534082958c14a579073b724b242042765c94a9534a6364d0458748c9d3\" pid:4448 exit_status:1 exited_at:{seconds:1748365511 nanos:152191100}" May 27 17:05:11.160228 kubelet[3388]: I0527 17:05:11.160187 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eed5dbb2-df5f-407e-aeca-9cb6ce8022ee-whisker-backend-key-pair\") pod \"whisker-7949bf5b56-xhxpw\" (UID: \"eed5dbb2-df5f-407e-aeca-9cb6ce8022ee\") " pod="calico-system/whisker-7949bf5b56-xhxpw" May 27 17:05:11.160678 kubelet[3388]: I0527 17:05:11.160634 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glfbf\" (UniqueName: \"kubernetes.io/projected/eed5dbb2-df5f-407e-aeca-9cb6ce8022ee-kube-api-access-glfbf\") pod \"whisker-7949bf5b56-xhxpw\" (UID: \"eed5dbb2-df5f-407e-aeca-9cb6ce8022ee\") " pod="calico-system/whisker-7949bf5b56-xhxpw" May 27 17:05:11.160735 kubelet[3388]: I0527 17:05:11.160709 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eed5dbb2-df5f-407e-aeca-9cb6ce8022ee-whisker-ca-bundle\") pod \"whisker-7949bf5b56-xhxpw\" (UID: \"eed5dbb2-df5f-407e-aeca-9cb6ce8022ee\") " pod="calico-system/whisker-7949bf5b56-xhxpw" May 27 17:05:11.420938 containerd[1873]: time="2025-05-27T17:05:11.420885362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7949bf5b56-xhxpw,Uid:eed5dbb2-df5f-407e-aeca-9cb6ce8022ee,Namespace:calico-system,Attempt:0,}" May 27 17:05:11.591290 systemd-networkd[1595]: cali5e64380bc78: Link UP May 27 17:05:11.592951 systemd-networkd[1595]: cali5e64380bc78: Gained carrier May 27 
17:05:11.616210 containerd[1873]: 2025-05-27 17:05:11.472 [INFO][4491] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:05:11.616210 containerd[1873]: 2025-05-27 17:05:11.494 [INFO][4491] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0 whisker-7949bf5b56- calico-system eed5dbb2-df5f-407e-aeca-9cb6ce8022ee 868 0 2025-05-27 17:05:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7949bf5b56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.0.0-a-f939a1e004 whisker-7949bf5b56-xhxpw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5e64380bc78 [] [] }} ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Namespace="calico-system" Pod="whisker-7949bf5b56-xhxpw" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-" May 27 17:05:11.616210 containerd[1873]: 2025-05-27 17:05:11.494 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Namespace="calico-system" Pod="whisker-7949bf5b56-xhxpw" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" May 27 17:05:11.616210 containerd[1873]: 2025-05-27 17:05:11.530 [INFO][4557] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" HandleID="k8s-pod-network.c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Workload="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" May 27 17:05:11.616545 containerd[1873]: 2025-05-27 17:05:11.530 [INFO][4557] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" HandleID="k8s-pod-network.c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Workload="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a71c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-f939a1e004", "pod":"whisker-7949bf5b56-xhxpw", "timestamp":"2025-05-27 17:05:11.530319743 +0000 UTC"}, Hostname:"ci-4344.0.0-a-f939a1e004", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:11.616545 containerd[1873]: 2025-05-27 17:05:11.530 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:11.616545 containerd[1873]: 2025-05-27 17:05:11.530 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:05:11.616545 containerd[1873]: 2025-05-27 17:05:11.530 [INFO][4557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-f939a1e004' May 27 17:05:11.616545 containerd[1873]: 2025-05-27 17:05:11.536 [INFO][4557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:11.616545 containerd[1873]: 2025-05-27 17:05:11.542 [INFO][4557] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-f939a1e004" May 27 17:05:11.616545 containerd[1873]: 2025-05-27 17:05:11.546 [INFO][4557] ipam/ipam.go 511: Trying affinity for 192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:11.616545 containerd[1873]: 2025-05-27 17:05:11.549 [INFO][4557] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:11.616545 containerd[1873]: 2025-05-27 17:05:11.552 [INFO][4557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:11.616680 containerd[1873]: 2025-05-27 17:05:11.552 [INFO][4557] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:11.616680 containerd[1873]: 2025-05-27 17:05:11.553 [INFO][4557] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb May 27 17:05:11.616680 containerd[1873]: 2025-05-27 17:05:11.559 [INFO][4557] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:11.616680 containerd[1873]: 2025-05-27 17:05:11.576 [INFO][4557] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.62.193/26] block=192.168.62.192/26 handle="k8s-pod-network.c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:11.616680 containerd[1873]: 2025-05-27 17:05:11.576 [INFO][4557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.193/26] handle="k8s-pod-network.c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:11.616680 containerd[1873]: 2025-05-27 17:05:11.576 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:11.616680 containerd[1873]: 2025-05-27 17:05:11.576 [INFO][4557] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.193/26] IPv6=[] ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" HandleID="k8s-pod-network.c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Workload="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" May 27 17:05:11.616768 containerd[1873]: 2025-05-27 17:05:11.581 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Namespace="calico-system" Pod="whisker-7949bf5b56-xhxpw" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0", GenerateName:"whisker-7949bf5b56-", Namespace:"calico-system", SelfLink:"", UID:"eed5dbb2-df5f-407e-aeca-9cb6ce8022ee", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 5, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7949bf5b56", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"", Pod:"whisker-7949bf5b56-xhxpw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5e64380bc78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:11.616768 containerd[1873]: 2025-05-27 17:05:11.581 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.193/32] ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Namespace="calico-system" Pod="whisker-7949bf5b56-xhxpw" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" May 27 17:05:11.616815 containerd[1873]: 2025-05-27 17:05:11.581 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e64380bc78 ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Namespace="calico-system" Pod="whisker-7949bf5b56-xhxpw" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" May 27 17:05:11.616815 containerd[1873]: 2025-05-27 17:05:11.591 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Namespace="calico-system" Pod="whisker-7949bf5b56-xhxpw" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" May 27 17:05:11.616905 containerd[1873]: 2025-05-27 17:05:11.595 [INFO][4491] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Namespace="calico-system" Pod="whisker-7949bf5b56-xhxpw" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0", GenerateName:"whisker-7949bf5b56-", Namespace:"calico-system", SelfLink:"", UID:"eed5dbb2-df5f-407e-aeca-9cb6ce8022ee", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 5, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7949bf5b56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb", Pod:"whisker-7949bf5b56-xhxpw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5e64380bc78", MAC:"72:fd:0d:41:9f:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:11.616942 containerd[1873]: 2025-05-27 17:05:11.612 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" Namespace="calico-system" Pod="whisker-7949bf5b56-xhxpw" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-whisker--7949bf5b56--xhxpw-eth0" May 27 17:05:11.684162 containerd[1873]: time="2025-05-27T17:05:11.684040259Z" level=info msg="connecting to shim c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb" address="unix:///run/containerd/s/ecdfdf73383a91b4abef45f0c9fb7725f81cf70e4e6aa1ae3cbf650d581705f1" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:11.721313 systemd[1]: Started cri-containerd-c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb.scope - libcontainer container c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb. May 27 17:05:11.798996 containerd[1873]: time="2025-05-27T17:05:11.798901940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7949bf5b56-xhxpw,Uid:eed5dbb2-df5f-407e-aeca-9cb6ce8022ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"c57ec8adae3a49141e9595f1ff9275525748b86a1daf9ab9ffbabf0eec6638bb\"" May 27 17:05:11.802465 containerd[1873]: time="2025-05-27T17:05:11.802172023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:05:11.909032 kubelet[3388]: I0527 17:05:11.908994 3388 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d900715-e430-414e-99e0-47f267553a55" path="/var/lib/kubelet/pods/1d900715-e430-414e-99e0-47f267553a55/volumes" May 27 17:05:11.973897 containerd[1873]: time="2025-05-27T17:05:11.973443872Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:11.981003 containerd[1873]: time="2025-05-27T17:05:11.980928067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = 
Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:11.981453 containerd[1873]: time="2025-05-27T17:05:11.980929827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:05:11.981490 kubelet[3388]: E0527 17:05:11.981149 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:11.981490 kubelet[3388]: E0527 17:05:11.981198 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:11.987188 kubelet[3388]: E0527 17:05:11.987111 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:228cc12306484fbba2733b0b66c92409,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:11.989362 containerd[1873]: 
time="2025-05-27T17:05:11.989317523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:05:12.139681 systemd-networkd[1595]: vxlan.calico: Link UP May 27 17:05:12.139688 systemd-networkd[1595]: vxlan.calico: Gained carrier May 27 17:05:12.177566 containerd[1873]: time="2025-05-27T17:05:12.177519257Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:12.180589 containerd[1873]: time="2025-05-27T17:05:12.180525801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:12.180888 containerd[1873]: time="2025-05-27T17:05:12.180555946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:05:12.180994 kubelet[3388]: E0527 17:05:12.180794 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:12.180994 kubelet[3388]: E0527 
17:05:12.180976 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:12.182385 kubelet[3388]: E0527 17:05:12.181424 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*100
01,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:12.182641 kubelet[3388]: E0527 17:05:12.182601 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" 
podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:05:12.907143 containerd[1873]: time="2025-05-27T17:05:12.907098648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649f86f58-8glzb,Uid:5f5890bf-8759-4c7b-99a7-03be7e49f603,Namespace:calico-system,Attempt:0,}" May 27 17:05:13.013472 systemd-networkd[1595]: cali949b3ca8cf8: Link UP May 27 17:05:13.014474 systemd-networkd[1595]: cali949b3ca8cf8: Gained carrier May 27 17:05:13.029619 containerd[1873]: 2025-05-27 17:05:12.951 [INFO][4723] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0 calico-kube-controllers-649f86f58- calico-system 5f5890bf-8759-4c7b-99a7-03be7e49f603 793 0 2025-05-27 17:04:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:649f86f58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.0.0-a-f939a1e004 calico-kube-controllers-649f86f58-8glzb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali949b3ca8cf8 [] [] }} ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Namespace="calico-system" Pod="calico-kube-controllers-649f86f58-8glzb" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-" May 27 17:05:13.029619 containerd[1873]: 2025-05-27 17:05:12.951 [INFO][4723] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Namespace="calico-system" Pod="calico-kube-controllers-649f86f58-8glzb" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" May 27 17:05:13.029619 containerd[1873]: 2025-05-27 17:05:12.975 
[INFO][4735] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" HandleID="k8s-pod-network.197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Workload="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" May 27 17:05:13.030143 containerd[1873]: 2025-05-27 17:05:12.975 [INFO][4735] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" HandleID="k8s-pod-network.197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Workload="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d78b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-f939a1e004", "pod":"calico-kube-controllers-649f86f58-8glzb", "timestamp":"2025-05-27 17:05:12.97517261 +0000 UTC"}, Hostname:"ci-4344.0.0-a-f939a1e004", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:13.030143 containerd[1873]: 2025-05-27 17:05:12.975 [INFO][4735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:13.030143 containerd[1873]: 2025-05-27 17:05:12.975 [INFO][4735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:05:13.030143 containerd[1873]: 2025-05-27 17:05:12.975 [INFO][4735] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-f939a1e004' May 27 17:05:13.030143 containerd[1873]: 2025-05-27 17:05:12.981 [INFO][4735] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:13.030143 containerd[1873]: 2025-05-27 17:05:12.986 [INFO][4735] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-f939a1e004" May 27 17:05:13.030143 containerd[1873]: 2025-05-27 17:05:12.990 [INFO][4735] ipam/ipam.go 511: Trying affinity for 192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:13.030143 containerd[1873]: 2025-05-27 17:05:12.992 [INFO][4735] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:13.030143 containerd[1873]: 2025-05-27 17:05:12.994 [INFO][4735] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:13.030303 containerd[1873]: 2025-05-27 17:05:12.994 [INFO][4735] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:13.030303 containerd[1873]: 2025-05-27 17:05:12.995 [INFO][4735] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35 May 27 17:05:13.030303 containerd[1873]: 2025-05-27 17:05:13.002 [INFO][4735] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:13.030303 containerd[1873]: 2025-05-27 17:05:13.008 [INFO][4735] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.62.194/26] block=192.168.62.192/26 handle="k8s-pod-network.197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:13.030303 containerd[1873]: 2025-05-27 17:05:13.008 [INFO][4735] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.194/26] handle="k8s-pod-network.197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:13.030303 containerd[1873]: 2025-05-27 17:05:13.008 [INFO][4735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:13.030303 containerd[1873]: 2025-05-27 17:05:13.008 [INFO][4735] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.194/26] IPv6=[] ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" HandleID="k8s-pod-network.197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Workload="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" May 27 17:05:13.030392 containerd[1873]: 2025-05-27 17:05:13.010 [INFO][4723] cni-plugin/k8s.go 418: Populated endpoint ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Namespace="calico-system" Pod="calico-kube-controllers-649f86f58-8glzb" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0", GenerateName:"calico-kube-controllers-649f86f58-", Namespace:"calico-system", SelfLink:"", UID:"5f5890bf-8759-4c7b-99a7-03be7e49f603", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649f86f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"", Pod:"calico-kube-controllers-649f86f58-8glzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali949b3ca8cf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:13.030428 containerd[1873]: 2025-05-27 17:05:13.010 [INFO][4723] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.194/32] ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Namespace="calico-system" Pod="calico-kube-controllers-649f86f58-8glzb" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" May 27 17:05:13.030428 containerd[1873]: 2025-05-27 17:05:13.010 [INFO][4723] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali949b3ca8cf8 ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Namespace="calico-system" Pod="calico-kube-controllers-649f86f58-8glzb" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" May 27 17:05:13.030428 containerd[1873]: 2025-05-27 17:05:13.014 [INFO][4723] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Namespace="calico-system" Pod="calico-kube-controllers-649f86f58-8glzb" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" May 27 17:05:13.030467 containerd[1873]: 2025-05-27 17:05:13.015 [INFO][4723] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Namespace="calico-system" Pod="calico-kube-controllers-649f86f58-8glzb" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0", GenerateName:"calico-kube-controllers-649f86f58-", Namespace:"calico-system", SelfLink:"", UID:"5f5890bf-8759-4c7b-99a7-03be7e49f603", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649f86f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35", Pod:"calico-kube-controllers-649f86f58-8glzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.194/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali949b3ca8cf8", MAC:"26:7a:7d:b0:c0:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:13.030502 containerd[1873]: 2025-05-27 17:05:13.027 [INFO][4723] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" Namespace="calico-system" Pod="calico-kube-controllers-649f86f58-8glzb" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--kube--controllers--649f86f58--8glzb-eth0" May 27 17:05:13.032015 systemd-networkd[1595]: cali5e64380bc78: Gained IPv6LL May 27 17:05:13.042918 kubelet[3388]: E0527 17:05:13.042864 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" 
podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:05:13.096309 containerd[1873]: time="2025-05-27T17:05:13.096034854Z" level=info msg="connecting to shim 197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35" address="unix:///run/containerd/s/09ccbc73e85dc5d5671bcf347c758e474e20c94bd59bcf273a7b26de92e8124f" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:13.123077 systemd[1]: Started cri-containerd-197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35.scope - libcontainer container 197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35. May 27 17:05:13.155184 containerd[1873]: time="2025-05-27T17:05:13.155080800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649f86f58-8glzb,Uid:5f5890bf-8759-4c7b-99a7-03be7e49f603,Namespace:calico-system,Attempt:0,} returns sandbox id \"197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35\"" May 27 17:05:13.157811 containerd[1873]: time="2025-05-27T17:05:13.157450447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 17:05:13.907387 containerd[1873]: time="2025-05-27T17:05:13.906914437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w84gq,Uid:9cc266f6-0fc6-480f-b972-0a23fa0f56cc,Namespace:kube-system,Attempt:0,}" May 27 17:05:13.907387 containerd[1873]: time="2025-05-27T17:05:13.907316277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plcbq,Uid:57d2f7fd-b442-44f4-9692-0fa48704b404,Namespace:calico-system,Attempt:0,}" May 27 17:05:14.035045 systemd-networkd[1595]: cali258daddbb70: Link UP May 27 17:05:14.035242 systemd-networkd[1595]: cali258daddbb70: Gained carrier May 27 17:05:14.059886 containerd[1873]: 2025-05-27 17:05:13.959 [INFO][4800] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0 coredns-668d6bf9bc- kube-system 
9cc266f6-0fc6-480f-b972-0a23fa0f56cc 789 0 2025-05-27 17:04:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.0.0-a-f939a1e004 coredns-668d6bf9bc-w84gq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali258daddbb70 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Namespace="kube-system" Pod="coredns-668d6bf9bc-w84gq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-" May 27 17:05:14.059886 containerd[1873]: 2025-05-27 17:05:13.959 [INFO][4800] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Namespace="kube-system" Pod="coredns-668d6bf9bc-w84gq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" May 27 17:05:14.059886 containerd[1873]: 2025-05-27 17:05:13.991 [INFO][4824] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" HandleID="k8s-pod-network.7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Workload="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" May 27 17:05:14.060239 containerd[1873]: 2025-05-27 17:05:13.991 [INFO][4824] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" HandleID="k8s-pod-network.7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Workload="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000237050), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.0.0-a-f939a1e004", "pod":"coredns-668d6bf9bc-w84gq", 
"timestamp":"2025-05-27 17:05:13.991611791 +0000 UTC"}, Hostname:"ci-4344.0.0-a-f939a1e004", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:14.060239 containerd[1873]: 2025-05-27 17:05:13.991 [INFO][4824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:14.060239 containerd[1873]: 2025-05-27 17:05:13.992 [INFO][4824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:05:14.060239 containerd[1873]: 2025-05-27 17:05:13.992 [INFO][4824] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-f939a1e004' May 27 17:05:14.060239 containerd[1873]: 2025-05-27 17:05:13.998 [INFO][4824] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.060239 containerd[1873]: 2025-05-27 17:05:14.002 [INFO][4824] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.060239 containerd[1873]: 2025-05-27 17:05:14.006 [INFO][4824] ipam/ipam.go 511: Trying affinity for 192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.060239 containerd[1873]: 2025-05-27 17:05:14.008 [INFO][4824] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.060239 containerd[1873]: 2025-05-27 17:05:14.010 [INFO][4824] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.060668 containerd[1873]: 2025-05-27 17:05:14.011 [INFO][4824] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" host="ci-4344.0.0-a-f939a1e004" May 27 
17:05:14.060668 containerd[1873]: 2025-05-27 17:05:14.012 [INFO][4824] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511 May 27 17:05:14.060668 containerd[1873]: 2025-05-27 17:05:14.018 [INFO][4824] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.060668 containerd[1873]: 2025-05-27 17:05:14.027 [INFO][4824] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.195/26] block=192.168.62.192/26 handle="k8s-pod-network.7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.060668 containerd[1873]: 2025-05-27 17:05:14.027 [INFO][4824] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.195/26] handle="k8s-pod-network.7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.060668 containerd[1873]: 2025-05-27 17:05:14.027 [INFO][4824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:05:14.060668 containerd[1873]: 2025-05-27 17:05:14.027 [INFO][4824] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.195/26] IPv6=[] ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" HandleID="k8s-pod-network.7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Workload="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" May 27 17:05:14.060767 containerd[1873]: 2025-05-27 17:05:14.029 [INFO][4800] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Namespace="kube-system" Pod="coredns-668d6bf9bc-w84gq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9cc266f6-0fc6-480f-b972-0a23fa0f56cc", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"", Pod:"coredns-668d6bf9bc-w84gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali258daddbb70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:14.060767 containerd[1873]: 2025-05-27 17:05:14.030 [INFO][4800] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.195/32] ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Namespace="kube-system" Pod="coredns-668d6bf9bc-w84gq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" May 27 17:05:14.060767 containerd[1873]: 2025-05-27 17:05:14.030 [INFO][4800] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali258daddbb70 ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Namespace="kube-system" Pod="coredns-668d6bf9bc-w84gq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" May 27 17:05:14.060767 containerd[1873]: 2025-05-27 17:05:14.035 [INFO][4800] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Namespace="kube-system" Pod="coredns-668d6bf9bc-w84gq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" May 27 17:05:14.060767 containerd[1873]: 2025-05-27 17:05:14.035 [INFO][4800] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Namespace="kube-system" Pod="coredns-668d6bf9bc-w84gq" 
WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9cc266f6-0fc6-480f-b972-0a23fa0f56cc", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511", Pod:"coredns-668d6bf9bc-w84gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali258daddbb70", MAC:"c6:dd:eb:bd:57:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:14.060767 containerd[1873]: 
2025-05-27 17:05:14.056 [INFO][4800] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" Namespace="kube-system" Pod="coredns-668d6bf9bc-w84gq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--w84gq-eth0" May 27 17:05:14.119980 systemd-networkd[1595]: vxlan.calico: Gained IPv6LL May 27 17:05:14.121767 containerd[1873]: time="2025-05-27T17:05:14.121726355Z" level=info msg="connecting to shim 7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511" address="unix:///run/containerd/s/da5484d1597e5f4c8a248ec21a757ec4298d2363093f2c84b2b3ee48a42b49fd" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:14.157990 systemd-networkd[1595]: cali534d692b40d: Link UP May 27 17:05:14.158198 systemd-networkd[1595]: cali534d692b40d: Gained carrier May 27 17:05:14.163419 systemd[1]: Started cri-containerd-7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511.scope - libcontainer container 7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511. 
May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:13.972 [INFO][4810] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0 csi-node-driver- calico-system 57d2f7fd-b442-44f4-9692-0fa48704b404 661 0 2025-05-27 17:04:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.0.0-a-f939a1e004 csi-node-driver-plcbq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali534d692b40d [] [] }} ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Namespace="calico-system" Pod="csi-node-driver-plcbq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:13.972 [INFO][4810] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Namespace="calico-system" Pod="csi-node-driver-plcbq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:13.999 [INFO][4831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" HandleID="k8s-pod-network.907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Workload="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:13.999 [INFO][4831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" 
HandleID="k8s-pod-network.907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Workload="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-f939a1e004", "pod":"csi-node-driver-plcbq", "timestamp":"2025-05-27 17:05:13.999320546 +0000 UTC"}, Hostname:"ci-4344.0.0-a-f939a1e004", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:13.999 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.027 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.029 [INFO][4831] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-f939a1e004' May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.099 [INFO][4831] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.104 [INFO][4831] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.110 [INFO][4831] ipam/ipam.go 511: Trying affinity for 192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.116 [INFO][4831] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.122 [INFO][4831] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.122 [INFO][4831] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.124 [INFO][4831] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6 May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.133 [INFO][4831] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.144 [INFO][4831] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.196/26] block=192.168.62.192/26 handle="k8s-pod-network.907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.144 [INFO][4831] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.196/26] handle="k8s-pod-network.907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.144 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:05:14.178697 containerd[1873]: 2025-05-27 17:05:14.144 [INFO][4831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.196/26] IPv6=[] ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" HandleID="k8s-pod-network.907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Workload="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" May 27 17:05:14.179970 containerd[1873]: 2025-05-27 17:05:14.148 [INFO][4810] cni-plugin/k8s.go 418: Populated endpoint ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Namespace="calico-system" Pod="csi-node-driver-plcbq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57d2f7fd-b442-44f4-9692-0fa48704b404", ResourceVersion:"661", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"", Pod:"csi-node-driver-plcbq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.196/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali534d692b40d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:14.179970 containerd[1873]: 2025-05-27 17:05:14.148 [INFO][4810] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.196/32] ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Namespace="calico-system" Pod="csi-node-driver-plcbq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" May 27 17:05:14.179970 containerd[1873]: 2025-05-27 17:05:14.148 [INFO][4810] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali534d692b40d ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Namespace="calico-system" Pod="csi-node-driver-plcbq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" May 27 17:05:14.179970 containerd[1873]: 2025-05-27 17:05:14.160 [INFO][4810] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Namespace="calico-system" Pod="csi-node-driver-plcbq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" May 27 17:05:14.179970 containerd[1873]: 2025-05-27 17:05:14.160 [INFO][4810] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Namespace="calico-system" Pod="csi-node-driver-plcbq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"57d2f7fd-b442-44f4-9692-0fa48704b404", ResourceVersion:"661", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6", Pod:"csi-node-driver-plcbq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali534d692b40d", MAC:"42:9b:21:c1:fb:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:14.179970 containerd[1873]: 2025-05-27 17:05:14.175 [INFO][4810] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" Namespace="calico-system" Pod="csi-node-driver-plcbq" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-csi--node--driver--plcbq-eth0" May 27 17:05:14.216351 containerd[1873]: time="2025-05-27T17:05:14.216308119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w84gq,Uid:9cc266f6-0fc6-480f-b972-0a23fa0f56cc,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511\"" May 27 17:05:14.220875 containerd[1873]: time="2025-05-27T17:05:14.220839892Z" level=info msg="CreateContainer within sandbox \"7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:05:14.294371 containerd[1873]: time="2025-05-27T17:05:14.293893613Z" level=info msg="Container 91a292cb4ceb3e3cdeba3019bdd4412e0f6b55ed115febee3a040e12904169d8: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:14.299325 containerd[1873]: time="2025-05-27T17:05:14.299282004Z" level=info msg="connecting to shim 907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6" address="unix:///run/containerd/s/b54ada50836021fc8b267254afd94363c19dd83aaae1dd4112e4efc1e5a162e0" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:14.309663 containerd[1873]: time="2025-05-27T17:05:14.309623120Z" level=info msg="CreateContainer within sandbox \"7ccba8f9701e05b83eadcabe5d6970ed251d3f9d6a77cd2eca41e92c02abd511\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91a292cb4ceb3e3cdeba3019bdd4412e0f6b55ed115febee3a040e12904169d8\"" May 27 17:05:14.310947 containerd[1873]: time="2025-05-27T17:05:14.310880266Z" level=info msg="StartContainer for \"91a292cb4ceb3e3cdeba3019bdd4412e0f6b55ed115febee3a040e12904169d8\"" May 27 17:05:14.314524 containerd[1873]: time="2025-05-27T17:05:14.314337188Z" level=info msg="connecting to shim 91a292cb4ceb3e3cdeba3019bdd4412e0f6b55ed115febee3a040e12904169d8" address="unix:///run/containerd/s/da5484d1597e5f4c8a248ec21a757ec4298d2363093f2c84b2b3ee48a42b49fd" protocol=ttrpc version=3 May 27 17:05:14.321157 systemd[1]: Started cri-containerd-907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6.scope - libcontainer container 907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6. 
May 27 17:05:14.342261 systemd[1]: Started cri-containerd-91a292cb4ceb3e3cdeba3019bdd4412e0f6b55ed115febee3a040e12904169d8.scope - libcontainer container 91a292cb4ceb3e3cdeba3019bdd4412e0f6b55ed115febee3a040e12904169d8. May 27 17:05:14.411522 containerd[1873]: time="2025-05-27T17:05:14.411369865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plcbq,Uid:57d2f7fd-b442-44f4-9692-0fa48704b404,Namespace:calico-system,Attempt:0,} returns sandbox id \"907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6\"" May 27 17:05:14.413046 containerd[1873]: time="2025-05-27T17:05:14.412962713Z" level=info msg="StartContainer for \"91a292cb4ceb3e3cdeba3019bdd4412e0f6b55ed115febee3a040e12904169d8\" returns successfully" May 27 17:05:14.824249 systemd-networkd[1595]: cali949b3ca8cf8: Gained IPv6LL May 27 17:05:14.907582 containerd[1873]: time="2025-05-27T17:05:14.907530586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7575dbc8d6-bc5nn,Uid:9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e,Namespace:calico-apiserver,Attempt:0,}" May 27 17:05:14.908286 containerd[1873]: time="2025-05-27T17:05:14.907652359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jj5zj,Uid:b3c16600-5e75-4169-8708-58525af73bd4,Namespace:kube-system,Attempt:0,}" May 27 17:05:15.050474 systemd-networkd[1595]: calia10dc1d7298: Link UP May 27 17:05:15.051442 systemd-networkd[1595]: calia10dc1d7298: Gained carrier May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:14.967 [INFO][4983] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0 calico-apiserver-7575dbc8d6- calico-apiserver 9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e 801 0 2025-05-27 17:04:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7575dbc8d6 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.0.0-a-f939a1e004 calico-apiserver-7575dbc8d6-bc5nn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia10dc1d7298 [] [] }} ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-bc5nn" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:14.967 [INFO][4983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-bc5nn" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:14.999 [INFO][5009] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" HandleID="k8s-pod-network.862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Workload="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:14.999 [INFO][5009] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" HandleID="k8s-pod-network.862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Workload="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.0.0-a-f939a1e004", "pod":"calico-apiserver-7575dbc8d6-bc5nn", "timestamp":"2025-05-27 17:05:14.999123255 +0000 UTC"}, 
Hostname:"ci-4344.0.0-a-f939a1e004", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:14.999 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.000 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.000 [INFO][5009] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-f939a1e004' May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.008 [INFO][5009] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.013 [INFO][5009] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.018 [INFO][5009] ipam/ipam.go 511: Trying affinity for 192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.021 [INFO][5009] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.024 [INFO][5009] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.024 [INFO][5009] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.026 
[INFO][5009] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.031 [INFO][5009] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.039 [INFO][5009] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.197/26] block=192.168.62.192/26 handle="k8s-pod-network.862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.039 [INFO][5009] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.197/26] handle="k8s-pod-network.862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.039 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:05:15.080535 containerd[1873]: 2025-05-27 17:05:15.039 [INFO][5009] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.197/26] IPv6=[] ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" HandleID="k8s-pod-network.862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Workload="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" May 27 17:05:15.082058 containerd[1873]: 2025-05-27 17:05:15.042 [INFO][4983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-bc5nn" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0", GenerateName:"calico-apiserver-7575dbc8d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7575dbc8d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"", Pod:"calico-apiserver-7575dbc8d6-bc5nn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.62.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia10dc1d7298", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:15.082058 containerd[1873]: 2025-05-27 17:05:15.042 [INFO][4983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.197/32] ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-bc5nn" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" May 27 17:05:15.082058 containerd[1873]: 2025-05-27 17:05:15.042 [INFO][4983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia10dc1d7298 ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-bc5nn" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" May 27 17:05:15.082058 containerd[1873]: 2025-05-27 17:05:15.052 [INFO][4983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-bc5nn" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" May 27 17:05:15.082058 containerd[1873]: 2025-05-27 17:05:15.054 [INFO][4983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-bc5nn" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0", GenerateName:"calico-apiserver-7575dbc8d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7575dbc8d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb", Pod:"calico-apiserver-7575dbc8d6-bc5nn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia10dc1d7298", MAC:"0a:cb:9e:7b:e3:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:15.082058 containerd[1873]: 2025-05-27 17:05:15.071 [INFO][4983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-bc5nn" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--bc5nn-eth0" May 27 17:05:15.097730 kubelet[3388]: I0527 17:05:15.097674 3388 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w84gq" podStartSLOduration=36.097656424 podStartE2EDuration="36.097656424s" podCreationTimestamp="2025-05-27 17:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:05:15.09571345 +0000 UTC m=+43.310920539" watchObservedRunningTime="2025-05-27 17:05:15.097656424 +0000 UTC m=+43.312863433" May 27 17:05:15.164313 containerd[1873]: time="2025-05-27T17:05:15.164235935Z" level=info msg="connecting to shim 862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb" address="unix:///run/containerd/s/39413f2dd19c01545a8f8c4498ebd7da33583251ad846bfd315c7734a5ae24f5" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:15.198094 systemd[1]: Started cri-containerd-862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb.scope - libcontainer container 862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb. 
May 27 17:05:15.208749 systemd-networkd[1595]: cali1e5af2016d6: Link UP May 27 17:05:15.209927 systemd-networkd[1595]: cali1e5af2016d6: Gained carrier May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:14.969 [INFO][4994] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0 coredns-668d6bf9bc- kube-system b3c16600-5e75-4169-8708-58525af73bd4 800 0 2025-05-27 17:04:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.0.0-a-f939a1e004 coredns-668d6bf9bc-jj5zj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e5af2016d6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj5zj" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:14.969 [INFO][4994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj5zj" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.005 [INFO][5011] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" HandleID="k8s-pod-network.77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Workload="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.005 [INFO][5011] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" HandleID="k8s-pod-network.77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Workload="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7940), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.0.0-a-f939a1e004", "pod":"coredns-668d6bf9bc-jj5zj", "timestamp":"2025-05-27 17:05:15.005239122 +0000 UTC"}, Hostname:"ci-4344.0.0-a-f939a1e004", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.005 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.039 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.039 [INFO][5011] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-f939a1e004' May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.111 [INFO][5011] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.119 [INFO][5011] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.140 [INFO][5011] ipam/ipam.go 511: Trying affinity for 192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.143 [INFO][5011] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.150 [INFO][5011] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.150 [INFO][5011] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.154 [INFO][5011] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9 May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.161 [INFO][5011] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.180 [INFO][5011] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.62.198/26] block=192.168.62.192/26 handle="k8s-pod-network.77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.180 [INFO][5011] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.198/26] handle="k8s-pod-network.77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.181 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:15.232791 containerd[1873]: 2025-05-27 17:05:15.181 [INFO][5011] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.198/26] IPv6=[] ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" HandleID="k8s-pod-network.77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Workload="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" May 27 17:05:15.233275 containerd[1873]: 2025-05-27 17:05:15.184 [INFO][4994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj5zj" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b3c16600-5e75-4169-8708-58525af73bd4", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"", Pod:"coredns-668d6bf9bc-jj5zj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e5af2016d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:15.233275 containerd[1873]: 2025-05-27 17:05:15.186 [INFO][4994] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.198/32] ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj5zj" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" May 27 17:05:15.233275 containerd[1873]: 2025-05-27 17:05:15.186 [INFO][4994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e5af2016d6 ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj5zj" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" May 27 17:05:15.233275 containerd[1873]: 2025-05-27 17:05:15.209 [INFO][4994] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj5zj" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" May 27 17:05:15.233275 containerd[1873]: 2025-05-27 17:05:15.210 [INFO][4994] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj5zj" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b3c16600-5e75-4169-8708-58525af73bd4", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9", Pod:"coredns-668d6bf9bc-jj5zj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e5af2016d6", MAC:"a6:6a:84:2e:4c:cd", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:15.233275 containerd[1873]: 2025-05-27 17:05:15.228 [INFO][4994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj5zj" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-coredns--668d6bf9bc--jj5zj-eth0" May 27 17:05:15.413298 containerd[1873]: time="2025-05-27T17:05:15.413078066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7575dbc8d6-bc5nn,Uid:9cfe86f6-d971-4b5c-8f6a-1a23a4372e2e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb\"" May 27 17:05:15.462363 containerd[1873]: time="2025-05-27T17:05:15.462313261Z" level=info msg="connecting to shim 77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9" address="unix:///run/containerd/s/e6efd41c0d96eb5755b4c6e7a5153672befaef616c40319ee84f24f966c3dea2" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:15.486186 systemd[1]: Started cri-containerd-77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9.scope - libcontainer container 77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9. 
May 27 17:05:15.542079 containerd[1873]: time="2025-05-27T17:05:15.542035064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jj5zj,Uid:b3c16600-5e75-4169-8708-58525af73bd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9\"" May 27 17:05:15.546934 containerd[1873]: time="2025-05-27T17:05:15.546874593Z" level=info msg="CreateContainer within sandbox \"77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:05:15.588436 containerd[1873]: time="2025-05-27T17:05:15.588257531Z" level=info msg="Container b463e3d7e9ebb0419cae966d6dbb718829910e1bd805c792e874ed2c33d8c590: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:15.609558 containerd[1873]: time="2025-05-27T17:05:15.609514531Z" level=info msg="CreateContainer within sandbox \"77383f445377e77e1ec635fd83624acb15c7d8c6c6e32eb3f51dc4ab02b383f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b463e3d7e9ebb0419cae966d6dbb718829910e1bd805c792e874ed2c33d8c590\"" May 27 17:05:15.610624 containerd[1873]: time="2025-05-27T17:05:15.610581845Z" level=info msg="StartContainer for \"b463e3d7e9ebb0419cae966d6dbb718829910e1bd805c792e874ed2c33d8c590\"" May 27 17:05:15.611626 containerd[1873]: time="2025-05-27T17:05:15.611595758Z" level=info msg="connecting to shim b463e3d7e9ebb0419cae966d6dbb718829910e1bd805c792e874ed2c33d8c590" address="unix:///run/containerd/s/e6efd41c0d96eb5755b4c6e7a5153672befaef616c40319ee84f24f966c3dea2" protocol=ttrpc version=3 May 27 17:05:15.639101 systemd[1]: Started cri-containerd-b463e3d7e9ebb0419cae966d6dbb718829910e1bd805c792e874ed2c33d8c590.scope - libcontainer container b463e3d7e9ebb0419cae966d6dbb718829910e1bd805c792e874ed2c33d8c590. 
May 27 17:05:15.692083 containerd[1873]: time="2025-05-27T17:05:15.691461719Z" level=info msg="StartContainer for \"b463e3d7e9ebb0419cae966d6dbb718829910e1bd805c792e874ed2c33d8c590\" returns successfully" May 27 17:05:15.848057 systemd-networkd[1595]: cali258daddbb70: Gained IPv6LL May 27 17:05:15.906777 containerd[1873]: time="2025-05-27T17:05:15.906547464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7575dbc8d6-2h98x,Uid:df7177f7-e905-4cb8-a878-b8f83662a825,Namespace:calico-apiserver,Attempt:0,}" May 27 17:05:16.096897 kubelet[3388]: I0527 17:05:16.096761 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jj5zj" podStartSLOduration=37.096741928 podStartE2EDuration="37.096741928s" podCreationTimestamp="2025-05-27 17:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:05:16.083612628 +0000 UTC m=+44.298819637" watchObservedRunningTime="2025-05-27 17:05:16.096741928 +0000 UTC m=+44.311948937" May 27 17:05:16.104028 systemd-networkd[1595]: cali534d692b40d: Gained IPv6LL May 27 17:05:16.161378 containerd[1873]: time="2025-05-27T17:05:16.161317319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:16.164947 containerd[1873]: time="2025-05-27T17:05:16.164883533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 27 17:05:16.170520 containerd[1873]: time="2025-05-27T17:05:16.170460595Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:16.182286 containerd[1873]: time="2025-05-27T17:05:16.182173447Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:16.182656 containerd[1873]: time="2025-05-27T17:05:16.182620224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 3.024206091s" May 27 17:05:16.182656 containerd[1873]: time="2025-05-27T17:05:16.182652138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 27 17:05:16.185002 containerd[1873]: time="2025-05-27T17:05:16.184961174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 17:05:16.200812 containerd[1873]: time="2025-05-27T17:05:16.200762828Z" level=info msg="CreateContainer within sandbox \"197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 17:05:16.236001 containerd[1873]: time="2025-05-27T17:05:16.235303317Z" level=info msg="Container 1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:16.255699 systemd-networkd[1595]: cali7e55b89c5eb: Link UP May 27 17:05:16.256636 systemd-networkd[1595]: cali7e55b89c5eb: Gained carrier May 27 17:05:16.265642 containerd[1873]: time="2025-05-27T17:05:16.265581813Z" level=info msg="CreateContainer within sandbox \"197bf4d693009df9690059a61f1ab5b758c8369cfacad9d9faa31a3003652b35\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\"" May 27 17:05:16.266576 containerd[1873]: time="2025-05-27T17:05:16.266543715Z" level=info msg="StartContainer for \"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\"" May 27 17:05:16.271206 containerd[1873]: time="2025-05-27T17:05:16.271164539Z" level=info msg="connecting to shim 1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183" address="unix:///run/containerd/s/09ccbc73e85dc5d5671bcf347c758e474e20c94bd59bcf273a7b26de92e8124f" protocol=ttrpc version=3 May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.174 [INFO][5177] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0 calico-apiserver-7575dbc8d6- calico-apiserver df7177f7-e905-4cb8-a878-b8f83662a825 802 0 2025-05-27 17:04:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7575dbc8d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.0.0-a-f939a1e004 calico-apiserver-7575dbc8d6-2h98x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7e55b89c5eb [] [] }} ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-2h98x" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.174 [INFO][5177] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-2h98x" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" May 27 
17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.206 [INFO][5189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" HandleID="k8s-pod-network.970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Workload="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.206 [INFO][5189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" HandleID="k8s-pod-network.970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Workload="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d6f40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.0.0-a-f939a1e004", "pod":"calico-apiserver-7575dbc8d6-2h98x", "timestamp":"2025-05-27 17:05:16.206243174 +0000 UTC"}, Hostname:"ci-4344.0.0-a-f939a1e004", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.206 [INFO][5189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.206 [INFO][5189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.206 [INFO][5189] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-f939a1e004' May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.212 [INFO][5189] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.216 [INFO][5189] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-f939a1e004" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.221 [INFO][5189] ipam/ipam.go 511: Trying affinity for 192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.224 [INFO][5189] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.226 [INFO][5189] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.226 [INFO][5189] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.229 [INFO][5189] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92 May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.238 [INFO][5189] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.249 [INFO][5189] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.62.199/26] block=192.168.62.192/26 handle="k8s-pod-network.970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.249 [INFO][5189] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.199/26] handle="k8s-pod-network.970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.249 [INFO][5189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:16.272883 containerd[1873]: 2025-05-27 17:05:16.249 [INFO][5189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.199/26] IPv6=[] ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" HandleID="k8s-pod-network.970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Workload="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" May 27 17:05:16.274441 containerd[1873]: 2025-05-27 17:05:16.251 [INFO][5177] cni-plugin/k8s.go 418: Populated endpoint ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-2h98x" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0", GenerateName:"calico-apiserver-7575dbc8d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"df7177f7-e905-4cb8-a878-b8f83662a825", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7575dbc8d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"", Pod:"calico-apiserver-7575dbc8d6-2h98x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e55b89c5eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:16.274441 containerd[1873]: 2025-05-27 17:05:16.251 [INFO][5177] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.199/32] ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-2h98x" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" May 27 17:05:16.274441 containerd[1873]: 2025-05-27 17:05:16.251 [INFO][5177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e55b89c5eb ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-2h98x" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" May 27 17:05:16.274441 containerd[1873]: 2025-05-27 17:05:16.256 [INFO][5177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-2h98x" 
WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" May 27 17:05:16.274441 containerd[1873]: 2025-05-27 17:05:16.258 [INFO][5177] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-2h98x" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0", GenerateName:"calico-apiserver-7575dbc8d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"df7177f7-e905-4cb8-a878-b8f83662a825", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7575dbc8d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92", Pod:"calico-apiserver-7575dbc8d6-2h98x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e55b89c5eb", MAC:"2e:25:68:74:b9:8b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:16.274441 containerd[1873]: 2025-05-27 17:05:16.269 [INFO][5177] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" Namespace="calico-apiserver" Pod="calico-apiserver-7575dbc8d6-2h98x" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-calico--apiserver--7575dbc8d6--2h98x-eth0" May 27 17:05:16.295009 systemd[1]: Started cri-containerd-1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183.scope - libcontainer container 1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183. May 27 17:05:16.336196 containerd[1873]: time="2025-05-27T17:05:16.336082872Z" level=info msg="connecting to shim 970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92" address="unix:///run/containerd/s/2ebc5f58b7619792385cd75558c3df96d67e10b8a56fc3465389fc95fcce9914" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:16.340231 containerd[1873]: time="2025-05-27T17:05:16.340118001Z" level=info msg="StartContainer for \"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" returns successfully" May 27 17:05:16.368014 systemd[1]: Started cri-containerd-970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92.scope - libcontainer container 970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92. 
May 27 17:05:16.420186 containerd[1873]: time="2025-05-27T17:05:16.420142568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7575dbc8d6-2h98x,Uid:df7177f7-e905-4cb8-a878-b8f83662a825,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92\"" May 27 17:05:16.552009 systemd-networkd[1595]: calia10dc1d7298: Gained IPv6LL May 27 17:05:16.813042 systemd-networkd[1595]: cali1e5af2016d6: Gained IPv6LL May 27 17:05:16.907302 containerd[1873]: time="2025-05-27T17:05:16.907140531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-sxz5s,Uid:3a0c508c-f41a-47cd-aff0-78e2be619952,Namespace:calico-system,Attempt:0,}" May 27 17:05:17.013991 systemd-networkd[1595]: cali90288782391: Link UP May 27 17:05:17.014805 systemd-networkd[1595]: cali90288782391: Gained carrier May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.952 [INFO][5296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0 goldmane-78d55f7ddc- calico-system 3a0c508c-f41a-47cd-aff0-78e2be619952 796 0 2025-05-27 17:04:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.0.0-a-f939a1e004 goldmane-78d55f7ddc-sxz5s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali90288782391 [] [] }} ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Namespace="calico-system" Pod="goldmane-78d55f7ddc-sxz5s" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.952 [INFO][5296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Namespace="calico-system" Pod="goldmane-78d55f7ddc-sxz5s" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.972 [INFO][5307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" HandleID="k8s-pod-network.9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Workload="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.972 [INFO][5307] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" HandleID="k8s-pod-network.9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Workload="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022f050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-f939a1e004", "pod":"goldmane-78d55f7ddc-sxz5s", "timestamp":"2025-05-27 17:05:16.972112786 +0000 UTC"}, Hostname:"ci-4344.0.0-a-f939a1e004", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.972 [INFO][5307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.972 [INFO][5307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.972 [INFO][5307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-f939a1e004' May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.979 [INFO][5307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.984 [INFO][5307] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-f939a1e004" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.988 [INFO][5307] ipam/ipam.go 511: Trying affinity for 192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.990 [INFO][5307] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.992 [INFO][5307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4344.0.0-a-f939a1e004" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.992 [INFO][5307] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.993 [INFO][5307] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9 May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:16.998 [INFO][5307] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:17.006 [INFO][5307] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.62.200/26] block=192.168.62.192/26 handle="k8s-pod-network.9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:17.006 [INFO][5307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.200/26] handle="k8s-pod-network.9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" host="ci-4344.0.0-a-f939a1e004" May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:17.006 [INFO][5307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:17.035490 containerd[1873]: 2025-05-27 17:05:17.006 [INFO][5307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.200/26] IPv6=[] ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" HandleID="k8s-pod-network.9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Workload="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" May 27 17:05:17.036310 containerd[1873]: 2025-05-27 17:05:17.009 [INFO][5296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Namespace="calico-system" Pod="goldmane-78d55f7ddc-sxz5s" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"3a0c508c-f41a-47cd-aff0-78e2be619952", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"", Pod:"goldmane-78d55f7ddc-sxz5s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90288782391", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:17.036310 containerd[1873]: 2025-05-27 17:05:17.009 [INFO][5296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.200/32] ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Namespace="calico-system" Pod="goldmane-78d55f7ddc-sxz5s" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" May 27 17:05:17.036310 containerd[1873]: 2025-05-27 17:05:17.009 [INFO][5296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90288782391 ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Namespace="calico-system" Pod="goldmane-78d55f7ddc-sxz5s" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" May 27 17:05:17.036310 containerd[1873]: 2025-05-27 17:05:17.016 [INFO][5296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Namespace="calico-system" Pod="goldmane-78d55f7ddc-sxz5s" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" May 27 17:05:17.036310 containerd[1873]: 2025-05-27 17:05:17.017 [INFO][5296] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Namespace="calico-system" Pod="goldmane-78d55f7ddc-sxz5s" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"3a0c508c-f41a-47cd-aff0-78e2be619952", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-f939a1e004", ContainerID:"9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9", Pod:"goldmane-78d55f7ddc-sxz5s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali90288782391", MAC:"4a:df:d0:41:2c:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:17.036310 containerd[1873]: 2025-05-27 17:05:17.033 [INFO][5296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" Namespace="calico-system" Pod="goldmane-78d55f7ddc-sxz5s" WorkloadEndpoint="ci--4344.0.0--a--f939a1e004-k8s-goldmane--78d55f7ddc--sxz5s-eth0" May 27 17:05:17.113011 containerd[1873]: time="2025-05-27T17:05:17.112963883Z" level=info msg="connecting to shim 9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9" address="unix:///run/containerd/s/6d0b9bae90043ec47b6fbb1978450a2a3b40e538e1bca50ec35ff238393ae5f5" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:17.129242 containerd[1873]: time="2025-05-27T17:05:17.129203674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"bd34ce33a379b204765311f546019eff7e2a7e810076627906afc64912374a3b\" pid:5336 exited_at:{seconds:1748365517 nanos:128767585}" May 27 17:05:17.147153 systemd[1]: Started cri-containerd-9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9.scope - libcontainer container 9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9. 
May 27 17:05:17.149324 kubelet[3388]: I0527 17:05:17.149250 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-649f86f58-8glzb" podStartSLOduration=22.121079497 podStartE2EDuration="25.149194424s" podCreationTimestamp="2025-05-27 17:04:52 +0000 UTC" firstStartedPulling="2025-05-27 17:05:13.156601789 +0000 UTC m=+41.371808798" lastFinishedPulling="2025-05-27 17:05:16.184716716 +0000 UTC m=+44.399923725" observedRunningTime="2025-05-27 17:05:17.098261977 +0000 UTC m=+45.313468994" watchObservedRunningTime="2025-05-27 17:05:17.149194424 +0000 UTC m=+45.364401433" May 27 17:05:17.192826 containerd[1873]: time="2025-05-27T17:05:17.192778370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-sxz5s,Uid:3a0c508c-f41a-47cd-aff0-78e2be619952,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bc8c8c660d1543d14a0120f59b05be9966c6033f6ac85aa4c194b7869adfec9\"" May 27 17:05:18.087985 systemd-networkd[1595]: cali7e55b89c5eb: Gained IPv6LL May 27 17:05:18.174375 containerd[1873]: time="2025-05-27T17:05:18.173815129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:18.176069 containerd[1873]: time="2025-05-27T17:05:18.176036282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 27 17:05:18.179711 containerd[1873]: time="2025-05-27T17:05:18.179679739Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:18.184331 containerd[1873]: time="2025-05-27T17:05:18.184277515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 
17:05:18.184799 containerd[1873]: time="2025-05-27T17:05:18.184773446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.999779415s" May 27 17:05:18.185034 containerd[1873]: time="2025-05-27T17:05:18.184929101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 27 17:05:18.186171 containerd[1873]: time="2025-05-27T17:05:18.186081811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:05:18.187970 containerd[1873]: time="2025-05-27T17:05:18.187937597Z" level=info msg="CreateContainer within sandbox \"907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 17:05:18.216201 containerd[1873]: time="2025-05-27T17:05:18.216157626Z" level=info msg="Container 820045c57a59bf49350718a7eb34b3e173de09c6db7c89e447eeed46abc2735d: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:18.242156 containerd[1873]: time="2025-05-27T17:05:18.242030154Z" level=info msg="CreateContainer within sandbox \"907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"820045c57a59bf49350718a7eb34b3e173de09c6db7c89e447eeed46abc2735d\"" May 27 17:05:18.242979 containerd[1873]: time="2025-05-27T17:05:18.242744950Z" level=info msg="StartContainer for \"820045c57a59bf49350718a7eb34b3e173de09c6db7c89e447eeed46abc2735d\"" May 27 17:05:18.244365 containerd[1873]: time="2025-05-27T17:05:18.244336390Z" level=info msg="connecting to shim 820045c57a59bf49350718a7eb34b3e173de09c6db7c89e447eeed46abc2735d" 
address="unix:///run/containerd/s/b54ada50836021fc8b267254afd94363c19dd83aaae1dd4112e4efc1e5a162e0" protocol=ttrpc version=3 May 27 17:05:18.266020 systemd[1]: Started cri-containerd-820045c57a59bf49350718a7eb34b3e173de09c6db7c89e447eeed46abc2735d.scope - libcontainer container 820045c57a59bf49350718a7eb34b3e173de09c6db7c89e447eeed46abc2735d. May 27 17:05:18.297750 containerd[1873]: time="2025-05-27T17:05:18.297686653Z" level=info msg="StartContainer for \"820045c57a59bf49350718a7eb34b3e173de09c6db7c89e447eeed46abc2735d\" returns successfully" May 27 17:05:18.728005 systemd-networkd[1595]: cali90288782391: Gained IPv6LL May 27 17:05:21.457748 containerd[1873]: time="2025-05-27T17:05:21.457682300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:21.459951 containerd[1873]: time="2025-05-27T17:05:21.459911443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 27 17:05:21.462716 containerd[1873]: time="2025-05-27T17:05:21.462667046Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:21.477461 containerd[1873]: time="2025-05-27T17:05:21.477375978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:21.478140 containerd[1873]: time="2025-05-27T17:05:21.477990274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 3.291876566s" May 27 17:05:21.478140 containerd[1873]: time="2025-05-27T17:05:21.478027363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 27 17:05:21.479813 containerd[1873]: time="2025-05-27T17:05:21.479727357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:05:21.480877 containerd[1873]: time="2025-05-27T17:05:21.480486843Z" level=info msg="CreateContainer within sandbox \"862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:05:21.522074 containerd[1873]: time="2025-05-27T17:05:21.522030315Z" level=info msg="Container e094de720d9b5852226e5fbedccd32bface331db01bc8eb8917d5a7d674a51d5: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:21.541130 containerd[1873]: time="2025-05-27T17:05:21.540925594Z" level=info msg="CreateContainer within sandbox \"862dde443a0a81709cea1814d2affc8dae0ee327c08b5111704c944f5b3c65eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e094de720d9b5852226e5fbedccd32bface331db01bc8eb8917d5a7d674a51d5\"" May 27 17:05:21.542663 containerd[1873]: time="2025-05-27T17:05:21.542031133Z" level=info msg="StartContainer for \"e094de720d9b5852226e5fbedccd32bface331db01bc8eb8917d5a7d674a51d5\"" May 27 17:05:21.544857 containerd[1873]: time="2025-05-27T17:05:21.544784024Z" level=info msg="connecting to shim e094de720d9b5852226e5fbedccd32bface331db01bc8eb8917d5a7d674a51d5" address="unix:///run/containerd/s/39413f2dd19c01545a8f8c4498ebd7da33583251ad846bfd315c7734a5ae24f5" protocol=ttrpc version=3 May 27 17:05:21.565162 systemd[1]: Started cri-containerd-e094de720d9b5852226e5fbedccd32bface331db01bc8eb8917d5a7d674a51d5.scope - 
libcontainer container e094de720d9b5852226e5fbedccd32bface331db01bc8eb8917d5a7d674a51d5. May 27 17:05:21.605637 containerd[1873]: time="2025-05-27T17:05:21.605422367Z" level=info msg="StartContainer for \"e094de720d9b5852226e5fbedccd32bface331db01bc8eb8917d5a7d674a51d5\" returns successfully" May 27 17:05:21.831047 containerd[1873]: time="2025-05-27T17:05:21.830988079Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:21.835192 containerd[1873]: time="2025-05-27T17:05:21.834833381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 17:05:21.836277 containerd[1873]: time="2025-05-27T17:05:21.836241235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 356.42657ms" May 27 17:05:21.836277 containerd[1873]: time="2025-05-27T17:05:21.836276781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 27 17:05:21.838221 containerd[1873]: time="2025-05-27T17:05:21.838165646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:05:21.839104 containerd[1873]: time="2025-05-27T17:05:21.838585238Z" level=info msg="CreateContainer within sandbox \"970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:05:21.875517 containerd[1873]: time="2025-05-27T17:05:21.875464617Z" level=info msg="Container 2373bea4e618fadb42b3cf91eee20b4a9068a07566d33349aa0354f3f8e52ce9: 
CDI devices from CRI Config.CDIDevices: []" May 27 17:05:21.879415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435588994.mount: Deactivated successfully. May 27 17:05:21.904558 containerd[1873]: time="2025-05-27T17:05:21.904509963Z" level=info msg="CreateContainer within sandbox \"970e98eed3ef7bdcb192d1fdf59964d1a05c83be7d773292f22daae808f3db92\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2373bea4e618fadb42b3cf91eee20b4a9068a07566d33349aa0354f3f8e52ce9\"" May 27 17:05:21.905838 containerd[1873]: time="2025-05-27T17:05:21.905369325Z" level=info msg="StartContainer for \"2373bea4e618fadb42b3cf91eee20b4a9068a07566d33349aa0354f3f8e52ce9\"" May 27 17:05:21.909609 containerd[1873]: time="2025-05-27T17:05:21.909261860Z" level=info msg="connecting to shim 2373bea4e618fadb42b3cf91eee20b4a9068a07566d33349aa0354f3f8e52ce9" address="unix:///run/containerd/s/2ebc5f58b7619792385cd75558c3df96d67e10b8a56fc3465389fc95fcce9914" protocol=ttrpc version=3 May 27 17:05:21.940267 systemd[1]: Started cri-containerd-2373bea4e618fadb42b3cf91eee20b4a9068a07566d33349aa0354f3f8e52ce9.scope - libcontainer container 2373bea4e618fadb42b3cf91eee20b4a9068a07566d33349aa0354f3f8e52ce9. 
May 27 17:05:22.020665 containerd[1873]: time="2025-05-27T17:05:22.020599424Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:22.054424 containerd[1873]: time="2025-05-27T17:05:22.054383290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:22.054871 containerd[1873]: time="2025-05-27T17:05:22.054675829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:05:22.055081 kubelet[3388]: E0527 17:05:22.055042 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:22.056295 kubelet[3388]: E0527 17:05:22.055094 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:22.056588 containerd[1873]: time="2025-05-27T17:05:22.056506205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 17:05:22.057092 containerd[1873]: time="2025-05-27T17:05:22.057052274Z" level=info msg="StartContainer for \"2373bea4e618fadb42b3cf91eee20b4a9068a07566d33349aa0354f3f8e52ce9\" returns successfully" May 27 17:05:22.059522 kubelet[3388]: E0527 17:05:22.059459 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2hdf,ReadOnly:true,MountPath:/var/run/secrets/kubernete
s.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-sxz5s_calico-system(3a0c508c-f41a-47cd-aff0-78e2be619952): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:22.060762 kubelet[3388]: E0527 17:05:22.060707 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:05:22.115619 kubelet[3388]: E0527 17:05:22.115228 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:05:22.134650 kubelet[3388]: I0527 17:05:22.133266 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7575dbc8d6-2h98x" podStartSLOduration=27.718971589 podStartE2EDuration="33.133235374s" podCreationTimestamp="2025-05-27 17:04:49 +0000 UTC" firstStartedPulling="2025-05-27 17:05:16.422726879 +0000 UTC m=+44.637933896" lastFinishedPulling="2025-05-27 17:05:21.836990632 +0000 UTC m=+50.052197681" observedRunningTime="2025-05-27 17:05:22.127318863 +0000 UTC m=+50.342525872" watchObservedRunningTime="2025-05-27 17:05:22.133235374 +0000 UTC m=+50.348442383" May 27 17:05:22.150964 kubelet[3388]: I0527 17:05:22.150902 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7575dbc8d6-bc5nn" 
podStartSLOduration=27.086956668 podStartE2EDuration="33.150879756s" podCreationTimestamp="2025-05-27 17:04:49 +0000 UTC" firstStartedPulling="2025-05-27 17:05:15.415075577 +0000 UTC m=+43.630282586" lastFinishedPulling="2025-05-27 17:05:21.478998665 +0000 UTC m=+49.694205674" observedRunningTime="2025-05-27 17:05:22.149877957 +0000 UTC m=+50.365084982" watchObservedRunningTime="2025-05-27 17:05:22.150879756 +0000 UTC m=+50.366086829" May 27 17:05:23.115247 kubelet[3388]: I0527 17:05:23.115104 3388 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:05:23.116457 kubelet[3388]: I0527 17:05:23.115696 3388 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:05:24.249992 containerd[1873]: time="2025-05-27T17:05:24.249927466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:24.256616 containerd[1873]: time="2025-05-27T17:05:24.256563244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 27 17:05:24.264405 containerd[1873]: time="2025-05-27T17:05:24.264122146Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:24.271139 containerd[1873]: time="2025-05-27T17:05:24.271069904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:24.273274 containerd[1873]: time="2025-05-27T17:05:24.273147441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 2.216605995s" May 27 17:05:24.273274 containerd[1873]: time="2025-05-27T17:05:24.273186050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 27 17:05:24.274741 containerd[1873]: time="2025-05-27T17:05:24.274418674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:05:24.276603 containerd[1873]: time="2025-05-27T17:05:24.276562742Z" level=info msg="CreateContainer within sandbox \"907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 17:05:24.318246 containerd[1873]: time="2025-05-27T17:05:24.318199490Z" level=info msg="Container 727a60d4f773b6418df3e8ca730e9b39dbe670a31e8baac662ae7ee71d11e452: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:24.340410 containerd[1873]: time="2025-05-27T17:05:24.340358896Z" level=info msg="CreateContainer within sandbox \"907e5f7feb4af5a3ab1308d7066f6e41a567c76fbf9b99f8b529d63ec39be5e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"727a60d4f773b6418df3e8ca730e9b39dbe670a31e8baac662ae7ee71d11e452\"" May 27 17:05:24.341646 containerd[1873]: time="2025-05-27T17:05:24.341403016Z" level=info msg="StartContainer for \"727a60d4f773b6418df3e8ca730e9b39dbe670a31e8baac662ae7ee71d11e452\"" May 27 17:05:24.343789 containerd[1873]: time="2025-05-27T17:05:24.343760996Z" level=info msg="connecting to shim 727a60d4f773b6418df3e8ca730e9b39dbe670a31e8baac662ae7ee71d11e452" address="unix:///run/containerd/s/b54ada50836021fc8b267254afd94363c19dd83aaae1dd4112e4efc1e5a162e0" protocol=ttrpc version=3 May 27 17:05:24.369160 
systemd[1]: Started cri-containerd-727a60d4f773b6418df3e8ca730e9b39dbe670a31e8baac662ae7ee71d11e452.scope - libcontainer container 727a60d4f773b6418df3e8ca730e9b39dbe670a31e8baac662ae7ee71d11e452. May 27 17:05:24.462391 containerd[1873]: time="2025-05-27T17:05:24.462343090Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:24.616181 containerd[1873]: time="2025-05-27T17:05:24.616027572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:24.616181 containerd[1873]: time="2025-05-27T17:05:24.616062285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:05:24.618172 kubelet[3388]: E0527 17:05:24.616438 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:24.618172 kubelet[3388]: E0527 17:05:24.616485 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:24.618172 kubelet[3388]: E0527 17:05:24.616603 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:228cc12306484fbba2733b0b66c92409,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): 
ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:24.622583 containerd[1873]: time="2025-05-27T17:05:24.622139458Z" level=info msg="StartContainer for \"727a60d4f773b6418df3e8ca730e9b39dbe670a31e8baac662ae7ee71d11e452\" returns successfully" May 27 17:05:24.622583 containerd[1873]: time="2025-05-27T17:05:24.622270567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:05:24.800901 containerd[1873]: time="2025-05-27T17:05:24.800671036Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:24.803856 containerd[1873]: time="2025-05-27T17:05:24.803797093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:24.804139 containerd[1873]: time="2025-05-27T17:05:24.803835919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:05:24.804238 kubelet[3388]: E0527 17:05:24.804201 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:24.804531 kubelet[3388]: E0527 17:05:24.804331 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:24.804531 kubelet[3388]: E0527 17:05:24.804464 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:24.805700 kubelet[3388]: E0527 17:05:24.805656 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:05:24.990703 kubelet[3388]: I0527 17:05:24.990445 3388 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 17:05:24.990703 kubelet[3388]: I0527 17:05:24.990510 3388 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 17:05:35.908682 kubelet[3388]: E0527 17:05:35.908278 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:05:35.925923 kubelet[3388]: I0527 17:05:35.925780 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-plcbq" podStartSLOduration=34.063728151 podStartE2EDuration="43.924078765s" podCreationTimestamp="2025-05-27 17:04:52 +0000 UTC" firstStartedPulling="2025-05-27 17:05:14.41385698 +0000 UTC m=+42.629063989" lastFinishedPulling="2025-05-27 17:05:24.274207594 +0000 UTC m=+52.489414603" observedRunningTime="2025-05-27 17:05:25.141302656 +0000 UTC m=+53.356509665" watchObservedRunningTime="2025-05-27 17:05:35.924078765 +0000 UTC m=+64.139285782" May 27 17:05:37.909078 containerd[1873]: time="2025-05-27T17:05:37.908800164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:05:38.135351 containerd[1873]: time="2025-05-27T17:05:38.135137174Z" level=info 
msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:38.139291 containerd[1873]: time="2025-05-27T17:05:38.139149844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:38.139291 containerd[1873]: time="2025-05-27T17:05:38.139180101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:05:38.140165 kubelet[3388]: E0527 17:05:38.139578 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:38.140165 kubelet[3388]: E0527 17:05:38.139634 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:38.140165 kubelet[3388]: E0527 17:05:38.139744 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2hdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-sxz5s_calico-system(3a0c508c-f41a-47cd-aff0-78e2be619952): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:38.141406 kubelet[3388]: E0527 17:05:38.141271 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:05:41.104548 containerd[1873]: 
time="2025-05-27T17:05:41.104365702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" id:\"c448c964132ff46bac5454732eeb5816fa04163a2e2f18392cf5a33d8cd0549d\" pid:5591 exited_at:{seconds:1748365541 nanos:103881979}" May 27 17:05:44.049587 kubelet[3388]: I0527 17:05:44.049526 3388 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:05:46.916174 containerd[1873]: time="2025-05-27T17:05:46.915930621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:05:47.104968 containerd[1873]: time="2025-05-27T17:05:47.104875510Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:47.109059 containerd[1873]: time="2025-05-27T17:05:47.108999683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:47.109391 containerd[1873]: time="2025-05-27T17:05:47.109229435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:05:47.109709 kubelet[3388]: E0527 17:05:47.109676 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous 
token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:47.110585 kubelet[3388]: E0527 17:05:47.110099 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:47.110750 kubelet[3388]: E0527 17:05:47.110710 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:228cc12306484fbba2733b0b66c92409,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Secco
mpProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:47.114626 containerd[1873]: time="2025-05-27T17:05:47.114454612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:05:47.141187 containerd[1873]: time="2025-05-27T17:05:47.141094187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"f6fe2b299863d9b384d8d935a33ed7cc9ec5e7eae46636b32bde17ab8c2e306e\" pid:5619 exited_at:{seconds:1748365547 nanos:138814833}" May 27 17:05:47.275934 containerd[1873]: time="2025-05-27T17:05:47.275190202Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:47.278793 containerd[1873]: time="2025-05-27T17:05:47.278667979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:47.278793 containerd[1873]: time="2025-05-27T17:05:47.278707036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:05:47.279229 kubelet[3388]: E0527 17:05:47.279172 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:47.279424 kubelet[3388]: E0527 17:05:47.279229 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:47.279681 kubelet[3388]: E0527 17:05:47.279633 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:47.283832 kubelet[3388]: E0527 17:05:47.283731 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:05:48.907324 kubelet[3388]: E0527 17:05:48.907218 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:05:56.533014 kubelet[3388]: I0527 17:05:56.532967 3388 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:05:59.912416 containerd[1873]: time="2025-05-27T17:05:59.912378408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:06:00.094108 containerd[1873]: time="2025-05-27T17:06:00.094045370Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:06:00.097512 containerd[1873]: time="2025-05-27T17:06:00.097448454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:06:00.098067 containerd[1873]: time="2025-05-27T17:06:00.097458806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:06:00.098107 kubelet[3388]: E0527 17:06:00.097691 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:06:00.098107 kubelet[3388]: E0527 17:06:00.097742 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:06:00.099404 kubelet[3388]: E0527 17:06:00.098917 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2hdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-sxz5s_calico-system(3a0c508c-f41a-47cd-aff0-78e2be619952): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:06:00.101852 kubelet[3388]: E0527 17:06:00.100842 3388 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:06:02.909904 kubelet[3388]: E0527 17:06:02.909846 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:06:05.776905 update_engine[1855]: I20250527 17:06:05.776705 1855 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 27 
17:06:05.776905 update_engine[1855]: I20250527 17:06:05.776759 1855 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 27 17:06:05.777861 update_engine[1855]: I20250527 17:06:05.777375 1855 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 27 17:06:05.778470 update_engine[1855]: I20250527 17:06:05.778378 1855 omaha_request_params.cc:62] Current group set to alpha
May 27 17:06:05.779266 update_engine[1855]: I20250527 17:06:05.779152 1855 update_attempter.cc:499] Already updated boot flags. Skipping.
May 27 17:06:05.779266 update_engine[1855]: I20250527 17:06:05.779175 1855 update_attempter.cc:643] Scheduling an action processor start.
May 27 17:06:05.779266 update_engine[1855]: I20250527 17:06:05.779194 1855 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 27 17:06:05.781881 update_engine[1855]: I20250527 17:06:05.781080 1855 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 27 17:06:05.781881 update_engine[1855]: I20250527 17:06:05.781164 1855 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 27 17:06:05.781881 update_engine[1855]: I20250527 17:06:05.781169 1855 omaha_request_action.cc:272] Request:
May 27 17:06:05.781881 update_engine[1855]:
May 27 17:06:05.781881 update_engine[1855]:
May 27 17:06:05.781881 update_engine[1855]:
May 27 17:06:05.781881 update_engine[1855]:
May 27 17:06:05.781881 update_engine[1855]:
May 27 17:06:05.781881 update_engine[1855]:
May 27 17:06:05.781881 update_engine[1855]:
May 27 17:06:05.781881 update_engine[1855]:
May 27 17:06:05.781881 update_engine[1855]: I20250527 17:06:05.781174 1855 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 17:06:05.783279 update_engine[1855]: I20250527 17:06:05.783244 1855 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 17:06:05.783756 update_engine[1855]: I20250527 17:06:05.783726 1855 libcurl_http_fetcher.cc:449] Setting
up timeout source: 1 seconds.
May 27 17:06:05.787219 locksmithd[1979]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 27 17:06:05.814222 update_engine[1855]: E20250527 17:06:05.814164 1855 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 17:06:05.814453 update_engine[1855]: I20250527 17:06:05.814435 1855 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 27 17:06:10.907786 kubelet[3388]: E0527 17:06:10.907441 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952"
May 27 17:06:11.115073 containerd[1873]: time="2025-05-27T17:06:11.115016575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" id:\"9b5d0eee50b09920fe5633d0274bb4d7cc8915cd1f352f0834c8b9516954c3c6\" pid:5655 exited_at:{seconds:1748365571 nanos:114607120}"
May 27 17:06:14.910346 kubelet[3388]: E0527 17:06:14.910269 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:06:15.746951 update_engine[1855]: I20250527 17:06:15.745882 1855 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:06:15.746951 update_engine[1855]: I20250527 17:06:15.746137 1855 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:06:15.746951 update_engine[1855]: I20250527 17:06:15.746384 1855 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 17:06:15.780318 update_engine[1855]: E20250527 17:06:15.780259 1855 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:06:15.780534 update_engine[1855]: I20250527 17:06:15.780503 1855 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 27 17:06:16.980142 containerd[1873]: time="2025-05-27T17:06:16.980102816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"21d43f753b1b47372ca3d05dbe4b41a14e6e55d4752c40029e4d3654fb044a33\" pid:5682 exited_at:{seconds:1748365576 nanos:979737762}" May 27 17:06:17.122093 containerd[1873]: time="2025-05-27T17:06:17.121647872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"188e2a401203c6e17d6a3406c234cba4b520375faaa4d44371db6322f7677b4a\" pid:5703 exited_at:{seconds:1748365577 nanos:120998863}" May 27 17:06:21.909957 kubelet[3388]: E0527 17:06:21.909334 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:06:25.746868 update_engine[1855]: I20250527 17:06:25.746738 1855 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:06:25.747712 update_engine[1855]: I20250527 17:06:25.747383 1855 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 
17:06:25.747712 update_engine[1855]: I20250527 17:06:25.747670 1855 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 17:06:25.816859 update_engine[1855]: E20250527 17:06:25.816773 1855 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:06:25.817105 update_engine[1855]: I20250527 17:06:25.817063 1855 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 27 17:06:29.911142 containerd[1873]: time="2025-05-27T17:06:29.911079560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:06:30.131998 containerd[1873]: time="2025-05-27T17:06:30.131791383Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:06:30.135403 containerd[1873]: time="2025-05-27T17:06:30.135274384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:06:30.135403 containerd[1873]: time="2025-05-27T17:06:30.135291056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:06:30.135691 kubelet[3388]: E0527 17:06:30.135653 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:06:30.136844 kubelet[3388]: E0527 17:06:30.136132 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:06:30.136844 kubelet[3388]: E0527 17:06:30.136255 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:228cc12306484fbba2733b0b66c92409,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfi
le:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:06:30.139961 containerd[1873]: time="2025-05-27T17:06:30.139609168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:06:30.325273 containerd[1873]: time="2025-05-27T17:06:30.325221242Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:06:30.330492 containerd[1873]: time="2025-05-27T17:06:30.330350064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:06:30.330492 containerd[1873]: time="2025-05-27T17:06:30.330402986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: 
active requests=0, bytes read=86" May 27 17:06:30.331222 kubelet[3388]: E0527 17:06:30.330955 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:06:30.331222 kubelet[3388]: E0527 17:06:30.331074 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:06:30.331222 kubelet[3388]: E0527 17:06:30.331187 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:06:30.332449 kubelet[3388]: E0527 17:06:30.332315 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:06:35.742861 update_engine[1855]: I20250527 17:06:35.742689 1855 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:06:35.743845 update_engine[1855]: I20250527 17:06:35.743497 1855 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:06:35.744018 update_engine[1855]: I20250527 17:06:35.743930 1855 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 17:06:35.748445 update_engine[1855]: E20250527 17:06:35.747726 1855 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.747786 1855 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.747794 1855 omaha_request_action.cc:617] Omaha request response:
May 27 17:06:35.748445 update_engine[1855]: E20250527 17:06:35.747907 1855 omaha_request_action.cc:636] Omaha request network transfer failed.
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.747923 1855 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.747927 1855 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.747931 1855 update_attempter.cc:306] Processing Done.
May 27 17:06:35.748445 update_engine[1855]: E20250527 17:06:35.747944 1855 update_attempter.cc:619] Update failed.
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.747947 1855 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.747951 1855 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.747956 1855 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.748023 1855 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.748041 1855 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 27 17:06:35.748445 update_engine[1855]: I20250527 17:06:35.748044 1855 omaha_request_action.cc:272] Request:
May 27 17:06:35.748445 update_engine[1855]:
May 27 17:06:35.748445 update_engine[1855]:
May 27 17:06:35.748748 update_engine[1855]:
May 27 17:06:35.748748 update_engine[1855]:
May 27 17:06:35.748748 update_engine[1855]:
May 27 17:06:35.748748 update_engine[1855]:
May 27 17:06:35.748748 update_engine[1855]: I20250527 17:06:35.748049 1855 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 17:06:35.748748 update_engine[1855]: I20250527 17:06:35.748180 1855 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 17:06:35.748748 update_engine[1855]: I20250527 17:06:35.748410 1855 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 17:06:35.749124 locksmithd[1979]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 27 17:06:35.853150 update_engine[1855]: E20250527 17:06:35.853084 1855 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 17:06:35.853479 update_engine[1855]: I20250527 17:06:35.853346 1855 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 27 17:06:35.853479 update_engine[1855]: I20250527 17:06:35.853360 1855 omaha_request_action.cc:617] Omaha request response:
May 27 17:06:35.853479 update_engine[1855]: I20250527 17:06:35.853366 1855 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 17:06:35.853479 update_engine[1855]: I20250527 17:06:35.853370 1855 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 17:06:35.853479 update_engine[1855]: I20250527 17:06:35.853374 1855 update_attempter.cc:306] Processing Done.
May 27 17:06:35.853479 update_engine[1855]: I20250527 17:06:35.853379 1855 update_attempter.cc:310] Error event sent.
May 27 17:06:35.853479 update_engine[1855]: I20250527 17:06:35.853389 1855 update_check_scheduler.cc:74] Next update check in 48m29s May 27 17:06:35.853787 locksmithd[1979]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 27 17:06:36.907963 kubelet[3388]: E0527 17:06:36.907860 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:06:41.102227 containerd[1873]: time="2025-05-27T17:06:41.101580121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" id:\"887acc3742dbd62382d52a8a225138ee5e30ffdbe33186f54f26e6b7b2a04bfb\" pid:5738 exited_at:{seconds:1748365601 nanos:100144957}" May 27 17:06:44.908656 kubelet[3388]: E0527 17:06:44.908465 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:06:47.110286 containerd[1873]: time="2025-05-27T17:06:47.110244152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"251e1d8934502a1f2ebc74a22d553ccc419faaafb465a59d8900db4f43421e88\" pid:5785 exited_at:{seconds:1748365607 nanos:110015680}" May 27 17:06:51.910722 containerd[1873]: time="2025-05-27T17:06:51.910676853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:06:52.093774 containerd[1873]: time="2025-05-27T17:06:52.093719720Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:06:52.097012 containerd[1873]: time="2025-05-27T17:06:52.096927342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 
Forbidden" May 27 17:06:52.097012 containerd[1873]: time="2025-05-27T17:06:52.096979304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:06:52.097782 kubelet[3388]: E0527 17:06:52.097145 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:06:52.097782 kubelet[3388]: E0527 17:06:52.097197 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:06:52.097782 kubelet[3388]: E0527 17:06:52.097306 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2hdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-sxz5s_calico-system(3a0c508c-f41a-47cd-aff0-78e2be619952): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:06:52.098952 kubelet[3388]: E0527 17:06:52.098897 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:06:55.909091 kubelet[3388]: E0527 
17:06:55.909040 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:07:02.909050 kubelet[3388]: E0527 17:07:02.908999 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:07:09.909552 
kubelet[3388]: E0527 17:07:09.909500 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:07:11.097417 containerd[1873]: time="2025-05-27T17:07:11.096795876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" id:\"ed11d2471965b2384a0acc9fd00e3aa1b4cb64cb4d2e82d0ac4261c6e6d5fb0f\" pid:5811 exited_at:{seconds:1748365631 nanos:96189286}" May 27 17:07:14.907268 kubelet[3388]: E0527 17:07:14.906911 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:07:16.980288 containerd[1873]: time="2025-05-27T17:07:16.980198293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"081b5aa9d900bb2592e69a3279be09cff3c31333c7bab12576b85143ec8f935b\" pid:5835 exited_at:{seconds:1748365636 nanos:979578238}" May 27 17:07:17.111040 containerd[1873]: time="2025-05-27T17:07:17.110985367Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"a03bcfaf4cba7e206a32027babe58ff32229e90b710059e0dd24dd06dd7db7e3\" pid:5856 exited_at:{seconds:1748365637 nanos:110442387}" May 27 17:07:22.909096 kubelet[3388]: E0527 17:07:22.909027 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:07:28.907042 kubelet[3388]: E0527 17:07:28.906976 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:07:33.909371 kubelet[3388]: E0527 17:07:33.909300 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:07:39.907865 kubelet[3388]: E0527 17:07:39.907754 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:07:41.092323 containerd[1873]: time="2025-05-27T17:07:41.092261729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" id:\"084ed288ef15dc3b6bb8db73231eb85649b7e9ac712e05a6c02509deb6a89188\" pid:5880 exited_at:{seconds:1748365661 nanos:91890835}" May 27 17:07:47.107757 containerd[1873]: time="2025-05-27T17:07:47.107713098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"a1da3eb05b5626c5f2601b7bf6083c434738d646ba69fcaf4d141fe94e93fa76\" pid:5904 exited_at:{seconds:1748365667 nanos:107487205}" May 27 17:07:47.907953 kubelet[3388]: E0527 17:07:47.907720 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:07:50.907342 kubelet[3388]: E0527 17:07:50.907305 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:07:52.395005 systemd[1]: Started sshd@7-10.200.20.19:22-10.200.16.10:44690.service - OpenSSH per-connection server daemon (10.200.16.10:44690). 
May 27 17:07:52.853758 sshd[5917]: Accepted publickey for core from 10.200.16.10 port 44690 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:07:52.855397 sshd-session[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:07:52.859802 systemd-logind[1853]: New session 10 of user core. May 27 17:07:52.866028 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 17:07:53.326644 sshd[5925]: Connection closed by 10.200.16.10 port 44690 May 27 17:07:53.327199 sshd-session[5917]: pam_unix(sshd:session): session closed for user core May 27 17:07:53.330498 systemd[1]: sshd@7-10.200.20.19:22-10.200.16.10:44690.service: Deactivated successfully. May 27 17:07:53.332290 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:07:53.333435 systemd-logind[1853]: Session 10 logged out. Waiting for processes to exit. May 27 17:07:53.334986 systemd-logind[1853]: Removed session 10. May 27 17:07:58.422047 systemd[1]: Started sshd@8-10.200.20.19:22-10.200.16.10:44706.service - OpenSSH per-connection server daemon (10.200.16.10:44706). May 27 17:07:58.916745 sshd[5941]: Accepted publickey for core from 10.200.16.10 port 44706 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:07:58.918104 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:07:58.924543 systemd-logind[1853]: New session 11 of user core. May 27 17:07:58.930055 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:07:59.316281 sshd[5943]: Connection closed by 10.200.16.10 port 44706 May 27 17:07:59.316851 sshd-session[5941]: pam_unix(sshd:session): session closed for user core May 27 17:07:59.322930 systemd-logind[1853]: Session 11 logged out. Waiting for processes to exit. May 27 17:07:59.323088 systemd[1]: sshd@8-10.200.20.19:22-10.200.16.10:44706.service: Deactivated successfully. 
May 27 17:07:59.325145 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:07:59.327631 systemd-logind[1853]: Removed session 11. May 27 17:08:02.908170 kubelet[3388]: E0527 17:08:02.907665 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952" May 27 17:08:02.909138 containerd[1873]: time="2025-05-27T17:08:02.908080326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:08:03.085843 containerd[1873]: time="2025-05-27T17:08:03.085577676Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:08:03.088594 containerd[1873]: time="2025-05-27T17:08:03.088476100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:08:03.088594 containerd[1873]: time="2025-05-27T17:08:03.088481549Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:08:03.088773 kubelet[3388]: E0527 17:08:03.088715 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:08:03.088882 kubelet[3388]: E0527 17:08:03.088767 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:08:03.088928 kubelet[3388]: E0527 17:08:03.088882 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:228cc12306484fbba2733b0b66c92409,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:08:03.091005 containerd[1873]: 
time="2025-05-27T17:08:03.090964718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:08:03.289694 containerd[1873]: time="2025-05-27T17:08:03.289495656Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:08:03.292827 containerd[1873]: time="2025-05-27T17:08:03.292755357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:08:03.293011 containerd[1873]: time="2025-05-27T17:08:03.292792375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:08:03.293104 kubelet[3388]: E0527 17:08:03.293056 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:08:03.293155 kubelet[3388]: E0527 17:08:03.293113 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:08:03.295774 kubelet[3388]: E0527 17:08:03.293214 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7949bf5b56-xhxpw_calico-system(eed5dbb2-df5f-407e-aeca-9cb6ce8022ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:08:03.296926 kubelet[3388]: E0527 17:08:03.296888 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee" May 27 17:08:04.397430 systemd[1]: Started 
sshd@9-10.200.20.19:22-10.200.16.10:38290.service - OpenSSH per-connection server daemon (10.200.16.10:38290). May 27 17:08:04.846731 sshd[5955]: Accepted publickey for core from 10.200.16.10 port 38290 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:08:04.848025 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:08:04.852267 systemd-logind[1853]: New session 12 of user core. May 27 17:08:04.863041 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:08:05.231234 sshd[5957]: Connection closed by 10.200.16.10 port 38290 May 27 17:08:05.230530 sshd-session[5955]: pam_unix(sshd:session): session closed for user core May 27 17:08:05.234256 systemd[1]: sshd@9-10.200.20.19:22-10.200.16.10:38290.service: Deactivated successfully. May 27 17:08:05.236037 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:08:05.236738 systemd-logind[1853]: Session 12 logged out. Waiting for processes to exit. May 27 17:08:05.238139 systemd-logind[1853]: Removed session 12. May 27 17:08:05.321188 systemd[1]: Started sshd@10-10.200.20.19:22-10.200.16.10:38294.service - OpenSSH per-connection server daemon (10.200.16.10:38294). May 27 17:08:05.810093 sshd[5970]: Accepted publickey for core from 10.200.16.10 port 38294 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:08:05.811450 sshd-session[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:08:05.816220 systemd-logind[1853]: New session 13 of user core. May 27 17:08:05.819009 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 17:08:06.236293 sshd[5972]: Connection closed by 10.200.16.10 port 38294 May 27 17:08:06.237147 sshd-session[5970]: pam_unix(sshd:session): session closed for user core May 27 17:08:06.239904 systemd[1]: sshd@10-10.200.20.19:22-10.200.16.10:38294.service: Deactivated successfully. 
May 27 17:08:06.242920 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:08:06.245606 systemd-logind[1853]: Session 13 logged out. Waiting for processes to exit. May 27 17:08:06.247886 systemd-logind[1853]: Removed session 13. May 27 17:08:06.332598 systemd[1]: Started sshd@11-10.200.20.19:22-10.200.16.10:38304.service - OpenSSH per-connection server daemon (10.200.16.10:38304). May 27 17:08:06.813968 sshd[5982]: Accepted publickey for core from 10.200.16.10 port 38304 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:08:06.815669 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:08:06.820017 systemd-logind[1853]: New session 14 of user core. May 27 17:08:06.825989 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:08:07.208106 sshd[5984]: Connection closed by 10.200.16.10 port 38304 May 27 17:08:07.208714 sshd-session[5982]: pam_unix(sshd:session): session closed for user core May 27 17:08:07.212238 systemd[1]: sshd@11-10.200.20.19:22-10.200.16.10:38304.service: Deactivated successfully. May 27 17:08:07.214157 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:08:07.215124 systemd-logind[1853]: Session 14 logged out. Waiting for processes to exit. May 27 17:08:07.217670 systemd-logind[1853]: Removed session 14. May 27 17:08:11.097458 containerd[1873]: time="2025-05-27T17:08:11.097359151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" id:\"246638cf9f74abb0d564796a99e8579b7b39ece106ca91d91487848f68db728e\" pid:6014 exit_status:1 exited_at:{seconds:1748365691 nanos:97051396}" May 27 17:08:12.291106 systemd[1]: Started sshd@12-10.200.20.19:22-10.200.16.10:34354.service - OpenSSH per-connection server daemon (10.200.16.10:34354). 
May 27 17:08:12.746456 sshd[6026]: Accepted publickey for core from 10.200.16.10 port 34354 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:12.747788 sshd-session[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:12.752370 systemd-logind[1853]: New session 15 of user core.
May 27 17:08:12.760001 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 17:08:13.135205 sshd[6028]: Connection closed by 10.200.16.10 port 34354
May 27 17:08:13.135817 sshd-session[6026]: pam_unix(sshd:session): session closed for user core
May 27 17:08:13.139323 systemd-logind[1853]: Session 15 logged out. Waiting for processes to exit.
May 27 17:08:13.139843 systemd[1]: sshd@12-10.200.20.19:22-10.200.16.10:34354.service: Deactivated successfully.
May 27 17:08:13.142396 systemd[1]: session-15.scope: Deactivated successfully.
May 27 17:08:13.144861 systemd-logind[1853]: Removed session 15.
May 27 17:08:15.908687 kubelet[3388]: E0527 17:08:15.908639 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee"
May 27 17:08:16.907925 containerd[1873]: time="2025-05-27T17:08:16.907859824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 17:08:16.982482 containerd[1873]: time="2025-05-27T17:08:16.982442725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"649e75eccd864881511912b1632480af93f16e4d84d677559b0f7a73bdb9c240\" pid:6054 exited_at:{seconds:1748365696 nanos:981760510}"
May 27 17:08:17.076036 containerd[1873]: time="2025-05-27T17:08:17.075942258Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:08:17.079430 containerd[1873]: time="2025-05-27T17:08:17.079368591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:08:17.079560 containerd[1873]: time="2025-05-27T17:08:17.079450098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 17:08:17.079721 kubelet[3388]: E0527 17:08:17.079677 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:08:17.080044 kubelet[3388]: E0527 17:08:17.079737 3388 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 17:08:17.080044 kubelet[3388]: E0527 17:08:17.079895 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2hdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-sxz5s_calico-system(3a0c508c-f41a-47cd-aff0-78e2be619952): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:08:17.081367 kubelet[3388]: E0527 17:08:17.081284 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952"
May 27 17:08:17.113264 containerd[1873]: time="2025-05-27T17:08:17.113221104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"0447834b4074a8912860930fa6092f69376e03d2ae157676dbd4742187bb8ffb\" pid:6076 exited_at:{seconds:1748365697 nanos:112896637}"
May 27 17:08:18.227440 systemd[1]: Started sshd@13-10.200.20.19:22-10.200.16.10:34362.service - OpenSSH per-connection server daemon (10.200.16.10:34362).
May 27 17:08:18.724114 sshd[6087]: Accepted publickey for core from 10.200.16.10 port 34362 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:18.725409 sshd-session[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:18.729971 systemd-logind[1853]: New session 16 of user core.
May 27 17:08:18.737008 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 17:08:19.118144 sshd[6089]: Connection closed by 10.200.16.10 port 34362
May 27 17:08:19.119000 sshd-session[6087]: pam_unix(sshd:session): session closed for user core
May 27 17:08:19.121965 systemd[1]: sshd@13-10.200.20.19:22-10.200.16.10:34362.service: Deactivated successfully.
May 27 17:08:19.123684 systemd[1]: session-16.scope: Deactivated successfully.
May 27 17:08:19.125241 systemd-logind[1853]: Session 16 logged out. Waiting for processes to exit.
May 27 17:08:19.127340 systemd-logind[1853]: Removed session 16.
May 27 17:08:24.202076 systemd[1]: Started sshd@14-10.200.20.19:22-10.200.16.10:56374.service - OpenSSH per-connection server daemon (10.200.16.10:56374).
May 27 17:08:24.662048 sshd[6122]: Accepted publickey for core from 10.200.16.10 port 56374 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:24.663277 sshd-session[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:24.670011 systemd-logind[1853]: New session 17 of user core.
May 27 17:08:24.677031 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 17:08:25.058316 sshd[6127]: Connection closed by 10.200.16.10 port 56374
May 27 17:08:25.057409 sshd-session[6122]: pam_unix(sshd:session): session closed for user core
May 27 17:08:25.060996 systemd-logind[1853]: Session 17 logged out. Waiting for processes to exit.
May 27 17:08:25.061298 systemd[1]: sshd@14-10.200.20.19:22-10.200.16.10:56374.service: Deactivated successfully.
May 27 17:08:25.063454 systemd[1]: session-17.scope: Deactivated successfully.
May 27 17:08:25.067934 systemd-logind[1853]: Removed session 17.
May 27 17:08:25.147284 systemd[1]: Started sshd@15-10.200.20.19:22-10.200.16.10:56382.service - OpenSSH per-connection server daemon (10.200.16.10:56382).
May 27 17:08:25.611084 sshd[6138]: Accepted publickey for core from 10.200.16.10 port 56382 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:25.612336 sshd-session[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:25.616552 systemd-logind[1853]: New session 18 of user core.
May 27 17:08:25.620993 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 17:08:26.060138 sshd[6140]: Connection closed by 10.200.16.10 port 56382
May 27 17:08:26.059547 sshd-session[6138]: pam_unix(sshd:session): session closed for user core
May 27 17:08:26.062816 systemd-logind[1853]: Session 18 logged out. Waiting for processes to exit.
May 27 17:08:26.063272 systemd[1]: sshd@15-10.200.20.19:22-10.200.16.10:56382.service: Deactivated successfully.
May 27 17:08:26.066049 systemd[1]: session-18.scope: Deactivated successfully.
May 27 17:08:26.068977 systemd-logind[1853]: Removed session 18.
May 27 17:08:26.148357 systemd[1]: Started sshd@16-10.200.20.19:22-10.200.16.10:56388.service - OpenSSH per-connection server daemon (10.200.16.10:56388).
May 27 17:08:26.649853 sshd[6150]: Accepted publickey for core from 10.200.16.10 port 56388 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:26.651409 sshd-session[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:26.657434 systemd-logind[1853]: New session 19 of user core.
May 27 17:08:26.667041 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 17:08:27.692888 sshd[6152]: Connection closed by 10.200.16.10 port 56388
May 27 17:08:27.693752 sshd-session[6150]: pam_unix(sshd:session): session closed for user core
May 27 17:08:27.697477 systemd[1]: sshd@16-10.200.20.19:22-10.200.16.10:56388.service: Deactivated successfully.
May 27 17:08:27.699169 systemd[1]: session-19.scope: Deactivated successfully.
May 27 17:08:27.700440 systemd-logind[1853]: Session 19 logged out. Waiting for processes to exit.
May 27 17:08:27.702656 systemd-logind[1853]: Removed session 19.
May 27 17:08:27.786653 systemd[1]: Started sshd@17-10.200.20.19:22-10.200.16.10:56402.service - OpenSSH per-connection server daemon (10.200.16.10:56402).
May 27 17:08:28.277798 sshd[6171]: Accepted publickey for core from 10.200.16.10 port 56402 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:28.279181 sshd-session[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:28.283268 systemd-logind[1853]: New session 20 of user core.
May 27 17:08:28.289857 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 17:08:28.765220 sshd[6173]: Connection closed by 10.200.16.10 port 56402
May 27 17:08:28.764317 sshd-session[6171]: pam_unix(sshd:session): session closed for user core
May 27 17:08:28.767924 systemd[1]: sshd@17-10.200.20.19:22-10.200.16.10:56402.service: Deactivated successfully.
May 27 17:08:28.769437 systemd[1]: session-20.scope: Deactivated successfully.
May 27 17:08:28.770671 systemd-logind[1853]: Session 20 logged out. Waiting for processes to exit.
May 27 17:08:28.772174 systemd-logind[1853]: Removed session 20.
May 27 17:08:28.855698 systemd[1]: Started sshd@18-10.200.20.19:22-10.200.16.10:36046.service - OpenSSH per-connection server daemon (10.200.16.10:36046).
May 27 17:08:28.908655 kubelet[3388]: E0527 17:08:28.908613 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952"
May 27 17:08:29.340200 sshd[6183]: Accepted publickey for core from 10.200.16.10 port 36046 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:29.341558 sshd-session[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:29.349119 systemd-logind[1853]: New session 21 of user core.
May 27 17:08:29.355330 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 17:08:29.743693 sshd[6185]: Connection closed by 10.200.16.10 port 36046
May 27 17:08:29.742799 sshd-session[6183]: pam_unix(sshd:session): session closed for user core
May 27 17:08:29.748617 systemd[1]: sshd@18-10.200.20.19:22-10.200.16.10:36046.service: Deactivated successfully.
May 27 17:08:29.750689 systemd[1]: session-21.scope: Deactivated successfully.
May 27 17:08:29.754615 systemd-logind[1853]: Session 21 logged out. Waiting for processes to exit.
May 27 17:08:29.757100 systemd-logind[1853]: Removed session 21.
May 27 17:08:30.912599 kubelet[3388]: E0527 17:08:30.912409 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee"
May 27 17:08:34.826420 systemd[1]: Started sshd@19-10.200.20.19:22-10.200.16.10:36054.service - OpenSSH per-connection server daemon (10.200.16.10:36054).
May 27 17:08:35.282937 sshd[6201]: Accepted publickey for core from 10.200.16.10 port 36054 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:35.284259 sshd-session[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:35.288804 systemd-logind[1853]: New session 22 of user core.
May 27 17:08:35.294193 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 17:08:35.660221 sshd[6207]: Connection closed by 10.200.16.10 port 36054
May 27 17:08:35.661076 sshd-session[6201]: pam_unix(sshd:session): session closed for user core
May 27 17:08:35.663949 systemd[1]: sshd@19-10.200.20.19:22-10.200.16.10:36054.service: Deactivated successfully.
May 27 17:08:35.665626 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:08:35.668307 systemd-logind[1853]: Session 22 logged out. Waiting for processes to exit.
May 27 17:08:35.669231 systemd-logind[1853]: Removed session 22.
May 27 17:08:39.908346 kubelet[3388]: E0527 17:08:39.908087 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952"
May 27 17:08:40.750735 systemd[1]: Started sshd@20-10.200.20.19:22-10.200.16.10:58960.service - OpenSSH per-connection server daemon (10.200.16.10:58960).
May 27 17:08:41.108313 containerd[1873]: time="2025-05-27T17:08:41.108092557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33655011c3f31a98deafe4f399cfe9042fa15d00a3ff50435630c3e1af08707b\" id:\"5d9cb6ff83ebafad3363db1dfd220c97675413ce881823701f8aa5aa245bd9ab\" pid:6235 exited_at:{seconds:1748365721 nanos:107757753}"
May 27 17:08:41.244938 sshd[6221]: Accepted publickey for core from 10.200.16.10 port 58960 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:41.247768 sshd-session[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:41.253399 systemd-logind[1853]: New session 23 of user core.
May 27 17:08:41.261132 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 17:08:41.677499 sshd[6246]: Connection closed by 10.200.16.10 port 58960
May 27 17:08:41.677388 sshd-session[6221]: pam_unix(sshd:session): session closed for user core
May 27 17:08:41.680666 systemd-logind[1853]: Session 23 logged out. Waiting for processes to exit.
May 27 17:08:41.681008 systemd[1]: sshd@20-10.200.20.19:22-10.200.16.10:58960.service: Deactivated successfully.
May 27 17:08:41.684183 systemd[1]: session-23.scope: Deactivated successfully.
May 27 17:08:41.686318 systemd-logind[1853]: Removed session 23.
May 27 17:08:45.908778 kubelet[3388]: E0527 17:08:45.908536 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee"
May 27 17:08:46.759111 systemd[1]: Started sshd@21-10.200.20.19:22-10.200.16.10:58970.service - OpenSSH per-connection server daemon (10.200.16.10:58970).
May 27 17:08:47.108963 containerd[1873]: time="2025-05-27T17:08:47.108864529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a0e0aacc03b0c73aa876b98bb93fe35c8ddb2a45a194564efb75b142117d183\" id:\"c3439871470368c7cee5fe0fc6e123fbe8aee0271bd681ba696f460ce48ff581\" pid:6273 exited_at:{seconds:1748365727 nanos:108598416}"
May 27 17:08:47.210075 sshd[6259]: Accepted publickey for core from 10.200.16.10 port 58970 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:47.211342 sshd-session[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:47.215938 systemd-logind[1853]: New session 24 of user core.
May 27 17:08:47.220156 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 17:08:47.590756 sshd[6282]: Connection closed by 10.200.16.10 port 58970
May 27 17:08:47.590278 sshd-session[6259]: pam_unix(sshd:session): session closed for user core
May 27 17:08:47.593841 systemd-logind[1853]: Session 24 logged out. Waiting for processes to exit.
May 27 17:08:47.594514 systemd[1]: sshd@21-10.200.20.19:22-10.200.16.10:58970.service: Deactivated successfully.
May 27 17:08:47.597584 systemd[1]: session-24.scope: Deactivated successfully.
May 27 17:08:47.599791 systemd-logind[1853]: Removed session 24.
May 27 17:08:52.672432 systemd[1]: Started sshd@22-10.200.20.19:22-10.200.16.10:51988.service - OpenSSH per-connection server daemon (10.200.16.10:51988).
May 27 17:08:53.128239 sshd[6293]: Accepted publickey for core from 10.200.16.10 port 51988 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:53.129506 sshd-session[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:53.133677 systemd-logind[1853]: New session 25 of user core.
May 27 17:08:53.141233 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 17:08:53.529212 sshd[6295]: Connection closed by 10.200.16.10 port 51988
May 27 17:08:53.528629 sshd-session[6293]: pam_unix(sshd:session): session closed for user core
May 27 17:08:53.532463 systemd-logind[1853]: Session 25 logged out. Waiting for processes to exit.
May 27 17:08:53.532465 systemd[1]: sshd@22-10.200.20.19:22-10.200.16.10:51988.service: Deactivated successfully.
May 27 17:08:53.534177 systemd[1]: session-25.scope: Deactivated successfully.
May 27 17:08:53.536400 systemd-logind[1853]: Removed session 25.
May 27 17:08:53.907920 kubelet[3388]: E0527 17:08:53.907872 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952"
May 27 17:08:58.617058 systemd[1]: Started sshd@23-10.200.20.19:22-10.200.16.10:33684.service - OpenSSH per-connection server daemon (10.200.16.10:33684).
May 27 17:08:59.109696 sshd[6308]: Accepted publickey for core from 10.200.16.10 port 33684 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:08:59.112419 sshd-session[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:08:59.120585 systemd-logind[1853]: New session 26 of user core.
May 27 17:08:59.177285 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 17:08:59.522471 sshd[6311]: Connection closed by 10.200.16.10 port 33684
May 27 17:08:59.523074 sshd-session[6308]: pam_unix(sshd:session): session closed for user core
May 27 17:08:59.527064 systemd[1]: sshd@23-10.200.20.19:22-10.200.16.10:33684.service: Deactivated successfully.
May 27 17:08:59.527262 systemd-logind[1853]: Session 26 logged out. Waiting for processes to exit.
May 27 17:08:59.529508 systemd[1]: session-26.scope: Deactivated successfully.
May 27 17:08:59.532458 systemd-logind[1853]: Removed session 26.
May 27 17:09:00.908138 kubelet[3388]: E0527 17:09:00.908051 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7949bf5b56-xhxpw" podUID="eed5dbb2-df5f-407e-aeca-9cb6ce8022ee"
May 27 17:09:04.604241 systemd[1]: Started sshd@24-10.200.20.19:22-10.200.16.10:33698.service - OpenSSH per-connection server daemon (10.200.16.10:33698).
May 27 17:09:05.060489 sshd[6323]: Accepted publickey for core from 10.200.16.10 port 33698 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:09:05.061677 sshd-session[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:09:05.065760 systemd-logind[1853]: New session 27 of user core.
May 27 17:09:05.074041 systemd[1]: Started session-27.scope - Session 27 of User core.
May 27 17:09:05.440022 sshd[6325]: Connection closed by 10.200.16.10 port 33698
May 27 17:09:05.440382 sshd-session[6323]: pam_unix(sshd:session): session closed for user core
May 27 17:09:05.444022 systemd[1]: sshd@24-10.200.20.19:22-10.200.16.10:33698.service: Deactivated successfully.
May 27 17:09:05.446386 systemd[1]: session-27.scope: Deactivated successfully.
May 27 17:09:05.447475 systemd-logind[1853]: Session 27 logged out. Waiting for processes to exit.
May 27 17:09:05.449356 systemd-logind[1853]: Removed session 27.
May 27 17:09:05.907858 kubelet[3388]: E0527 17:09:05.907509 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-sxz5s" podUID="3a0c508c-f41a-47cd-aff0-78e2be619952"