May 27 17:02:05.106397 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
May 27 17:02:05.106416 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 15:31:23 -00 2025
May 27 17:02:05.106422 kernel: KASLR enabled
May 27 17:02:05.106426 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 27 17:02:05.106432 kernel: printk: legacy bootconsole [pl11] enabled
May 27 17:02:05.106435 kernel: efi: EFI v2.7 by EDK II
May 27 17:02:05.106440 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
May 27 17:02:05.106444 kernel: random: crng init done
May 27 17:02:05.106449 kernel: secureboot: Secure boot disabled
May 27 17:02:05.106452 kernel: ACPI: Early table checksum verification disabled
May 27 17:02:05.106456 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 27 17:02:05.106460 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:02:05.106464 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:02:05.106469 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 27 17:02:05.106474 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:02:05.106478 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:02:05.106482 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:02:05.106487 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:02:05.106492 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:02:05.106496 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:02:05.106500 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 27 17:02:05.106504 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:02:05.106508 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 27 17:02:05.106513 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 27 17:02:05.106517 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 27 17:02:05.106521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
May 27 17:02:05.106525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
May 27 17:02:05.106529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 27 17:02:05.106534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 27 17:02:05.106539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 27 17:02:05.106543 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 27 17:02:05.106547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 27 17:02:05.106551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 27 17:02:05.106555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 27 17:02:05.106559 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 27 17:02:05.106563 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 27 17:02:05.106568 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
May 27 17:02:05.106572 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff]
May 27 17:02:05.106576 kernel: Zone ranges:
May 27 17:02:05.106580 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 27 17:02:05.106587 kernel: DMA32 empty
May 27 17:02:05.106591 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 27 17:02:05.106596 kernel: Device empty
May 27 17:02:05.106600 kernel: Movable zone start for each node
May 27 17:02:05.106604 kernel: Early memory node ranges
May 27 17:02:05.106610 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 27 17:02:05.106614 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
May 27 17:02:05.106618 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
May 27 17:02:05.106633 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
May 27 17:02:05.106638 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 27 17:02:05.106642 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 27 17:02:05.106646 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 27 17:02:05.106650 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 27 17:02:05.106655 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 27 17:02:05.106659 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 27 17:02:05.106663 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 27 17:02:05.106668 kernel: psci: probing for conduit method from ACPI.
May 27 17:02:05.106674 kernel: psci: PSCIv1.1 detected in firmware.
May 27 17:02:05.106678 kernel: psci: Using standard PSCI v0.2 function IDs
May 27 17:02:05.106682 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 27 17:02:05.106687 kernel: psci: SMC Calling Convention v1.4
May 27 17:02:05.106691 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 27 17:02:05.106695 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 27 17:02:05.106700 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 27 17:02:05.106704 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 27 17:02:05.106708 kernel: pcpu-alloc: [0] 0 [0] 1
May 27 17:02:05.106713 kernel: Detected PIPT I-cache on CPU0
May 27 17:02:05.106717 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
May 27 17:02:05.106722 kernel: CPU features: detected: GIC system register CPU interface
May 27 17:02:05.106727 kernel: CPU features: detected: Spectre-v4
May 27 17:02:05.106731 kernel: CPU features: detected: Spectre-BHB
May 27 17:02:05.106735 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 27 17:02:05.106740 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 27 17:02:05.106744 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
May 27 17:02:05.106749 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 27 17:02:05.106753 kernel: alternatives: applying boot alternatives
May 27 17:02:05.106758 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:02:05.106763 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:02:05.106767 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:02:05.106772 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:02:05.106777 kernel: Fallback order for Node 0: 0
May 27 17:02:05.106781 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
May 27 17:02:05.106785 kernel: Policy zone: Normal
May 27 17:02:05.106790 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:02:05.106794 kernel: software IO TLB: area num 2.
May 27 17:02:05.106798 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
May 27 17:02:05.106803 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 17:02:05.106807 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:02:05.106812 kernel: rcu: RCU event tracing is enabled.
May 27 17:02:05.106817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 17:02:05.106822 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:02:05.106827 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:02:05.106831 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:02:05.106836 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 17:02:05.106840 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:02:05.106844 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:02:05.106849 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 27 17:02:05.106853 kernel: GICv3: 960 SPIs implemented
May 27 17:02:05.106857 kernel: GICv3: 0 Extended SPIs implemented
May 27 17:02:05.106862 kernel: Root IRQ handler: gic_handle_irq
May 27 17:02:05.106866 kernel: GICv3: GICv3 features: 16 PPIs, RSS
May 27 17:02:05.106870 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
May 27 17:02:05.106875 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 27 17:02:05.106880 kernel: ITS: No ITS available, not enabling LPIs
May 27 17:02:05.106884 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:02:05.106889 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
May 27 17:02:05.106893 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 17:02:05.106898 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
May 27 17:02:05.106902 kernel: Console: colour dummy device 80x25
May 27 17:02:05.106907 kernel: printk: legacy console [tty1] enabled
May 27 17:02:05.106911 kernel: ACPI: Core revision 20240827
May 27 17:02:05.106916 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
May 27 17:02:05.106921 kernel: pid_max: default: 32768 minimum: 301
May 27 17:02:05.106926 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:02:05.106930 kernel: landlock: Up and running.
May 27 17:02:05.106935 kernel: SELinux: Initializing.
May 27 17:02:05.106939 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:02:05.106944 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:02:05.106952 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
May 27 17:02:05.106958 kernel: Hyper-V: Host Build 10.0.26100.1254-1-0
May 27 17:02:05.106963 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 27 17:02:05.106968 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:02:05.106972 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:02:05.106977 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:02:05.106983 kernel: Remapping and enabling EFI services.
May 27 17:02:05.106988 kernel: smp: Bringing up secondary CPUs ...
May 27 17:02:05.106992 kernel: Detected PIPT I-cache on CPU1
May 27 17:02:05.106997 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 27 17:02:05.107002 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
May 27 17:02:05.107007 kernel: smp: Brought up 1 node, 2 CPUs
May 27 17:02:05.107012 kernel: SMP: Total of 2 processors activated.
May 27 17:02:05.107017 kernel: CPU: All CPU(s) started at EL1
May 27 17:02:05.107022 kernel: CPU features: detected: 32-bit EL0 Support
May 27 17:02:05.107026 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 27 17:02:05.107031 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 27 17:02:05.107036 kernel: CPU features: detected: Common not Private translations
May 27 17:02:05.107041 kernel: CPU features: detected: CRC32 instructions
May 27 17:02:05.107045 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
May 27 17:02:05.107051 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 27 17:02:05.107056 kernel: CPU features: detected: LSE atomic instructions
May 27 17:02:05.107060 kernel: CPU features: detected: Privileged Access Never
May 27 17:02:05.107065 kernel: CPU features: detected: Speculation barrier (SB)
May 27 17:02:05.107070 kernel: CPU features: detected: TLB range maintenance instructions
May 27 17:02:05.107075 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 27 17:02:05.107079 kernel: CPU features: detected: Scalable Vector Extension
May 27 17:02:05.107084 kernel: alternatives: applying system-wide alternatives
May 27 17:02:05.107089 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
May 27 17:02:05.107094 kernel: SVE: maximum available vector length 16 bytes per vector
May 27 17:02:05.107099 kernel: SVE: default vector length 16 bytes per vector
May 27 17:02:05.107104 kernel: Memory: 3976112K/4194160K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 213432K reserved, 0K cma-reserved)
May 27 17:02:05.107109 kernel: devtmpfs: initialized
May 27 17:02:05.107113 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:02:05.107118 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 17:02:05.107123 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 27 17:02:05.107128 kernel: 0 pages in range for non-PLT usage
May 27 17:02:05.107132 kernel: 508544 pages in range for PLT usage
May 27 17:02:05.107138 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:02:05.107142 kernel: SMBIOS 3.1.0 present.
May 27 17:02:05.107147 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 27 17:02:05.107152 kernel: DMI: Memory slots populated: 2/2
May 27 17:02:05.107157 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:02:05.107161 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 27 17:02:05.107166 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 27 17:02:05.107171 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 27 17:02:05.107176 kernel: audit: initializing netlink subsys (disabled)
May 27 17:02:05.107182 kernel: audit: type=2000 audit(0.062:1): state=initialized audit_enabled=0 res=1
May 27 17:02:05.107186 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:02:05.107191 kernel: cpuidle: using governor menu
May 27 17:02:05.107196 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 27 17:02:05.107200 kernel: ASID allocator initialised with 32768 entries
May 27 17:02:05.107205 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:02:05.107210 kernel: Serial: AMBA PL011 UART driver
May 27 17:02:05.107215 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:02:05.107220 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:02:05.107225 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 27 17:02:05.107230 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 27 17:02:05.107235 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:02:05.107239 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:02:05.107244 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 27 17:02:05.107249 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 27 17:02:05.107253 kernel: ACPI: Added _OSI(Module Device)
May 27 17:02:05.107258 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:02:05.107263 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:02:05.107268 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:02:05.107273 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:02:05.107278 kernel: ACPI: Interpreter enabled
May 27 17:02:05.107282 kernel: ACPI: Using GIC for interrupt routing
May 27 17:02:05.107287 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 27 17:02:05.107292 kernel: printk: legacy console [ttyAMA0] enabled
May 27 17:02:05.107297 kernel: printk: legacy bootconsole [pl11] disabled
May 27 17:02:05.107301 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 27 17:02:05.107306 kernel: ACPI: CPU0 has been hot-added
May 27 17:02:05.107312 kernel: ACPI: CPU1 has been hot-added
May 27 17:02:05.107316 kernel: iommu: Default domain type: Translated
May 27 17:02:05.107321 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 27 17:02:05.107326 kernel: efivars: Registered efivars operations
May 27 17:02:05.107330 kernel: vgaarb: loaded
May 27 17:02:05.107335 kernel: clocksource: Switched to clocksource arch_sys_counter
May 27 17:02:05.107340 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:02:05.107345 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:02:05.107349 kernel: pnp: PnP ACPI init
May 27 17:02:05.107355 kernel: pnp: PnP ACPI: found 0 devices
May 27 17:02:05.107360 kernel: NET: Registered PF_INET protocol family
May 27 17:02:05.107364 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:02:05.107369 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 17:02:05.107374 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:02:05.107379 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:02:05.107384 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 17:02:05.107389 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 17:02:05.107393 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:02:05.107399 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:02:05.107404 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:02:05.107408 kernel: PCI: CLS 0 bytes, default 64
May 27 17:02:05.107413 kernel: kvm [1]: HYP mode not available
May 27 17:02:05.107418 kernel: Initialise system trusted keyrings
May 27 17:02:05.107422 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 17:02:05.107427 kernel: Key type asymmetric registered
May 27 17:02:05.107431 kernel: Asymmetric key parser 'x509' registered
May 27 17:02:05.107436 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 27 17:02:05.107442 kernel: io scheduler mq-deadline registered
May 27 17:02:05.107447 kernel: io scheduler kyber registered
May 27 17:02:05.107452 kernel: io scheduler bfq registered
May 27 17:02:05.107456 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:02:05.107461 kernel: thunder_xcv, ver 1.0
May 27 17:02:05.107466 kernel: thunder_bgx, ver 1.0
May 27 17:02:05.107470 kernel: nicpf, ver 1.0
May 27 17:02:05.107475 kernel: nicvf, ver 1.0
May 27 17:02:05.107594 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 27 17:02:05.107654 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T17:02:04 UTC (1748365324)
May 27 17:02:05.107661 kernel: efifb: probing for efifb
May 27 17:02:05.107666 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 27 17:02:05.107670 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 27 17:02:05.107675 kernel: efifb: scrolling: redraw
May 27 17:02:05.107680 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 17:02:05.107685 kernel: Console: switching to colour frame buffer device 128x48
May 27 17:02:05.107690 kernel: fb0: EFI VGA frame buffer device
May 27 17:02:05.107696 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 27 17:02:05.107701 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 17:02:05.107706 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 27 17:02:05.107710 kernel: NET: Registered PF_INET6 protocol family
May 27 17:02:05.107715 kernel: watchdog: NMI not fully supported
May 27 17:02:05.107720 kernel: watchdog: Hard watchdog permanently disabled
May 27 17:02:05.107725 kernel: Segment Routing with IPv6
May 27 17:02:05.107729 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:02:05.107734 kernel: NET: Registered PF_PACKET protocol family
May 27 17:02:05.107740 kernel: Key type dns_resolver registered
May 27 17:02:05.107744 kernel: registered taskstats version 1
May 27 17:02:05.107749 kernel: Loading compiled-in X.509 certificates
May 27 17:02:05.107754 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 8e5e45c34fa91568ef1fa3bdfd5a71a43b4c4580'
May 27 17:02:05.107759 kernel: Demotion targets for Node 0: null
May 27 17:02:05.107764 kernel: Key type .fscrypt registered
May 27 17:02:05.107768 kernel: Key type fscrypt-provisioning registered
May 27 17:02:05.107773 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:02:05.107778 kernel: ima: Allocated hash algorithm: sha1
May 27 17:02:05.107784 kernel: ima: No architecture policies found
May 27 17:02:05.107788 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 27 17:02:05.107793 kernel: clk: Disabling unused clocks
May 27 17:02:05.107798 kernel: PM: genpd: Disabling unused power domains
May 27 17:02:05.107803 kernel: Warning: unable to open an initial console.
May 27 17:02:05.107807 kernel: Freeing unused kernel memory: 39424K
May 27 17:02:05.107812 kernel: Run /init as init process
May 27 17:02:05.107817 kernel: with arguments:
May 27 17:02:05.107821 kernel: /init
May 27 17:02:05.107827 kernel: with environment:
May 27 17:02:05.107831 kernel: HOME=/
May 27 17:02:05.107836 kernel: TERM=linux
May 27 17:02:05.107841 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:02:05.107847 systemd[1]: Successfully made /usr/ read-only.
May 27 17:02:05.107854 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:02:05.107860 systemd[1]: Detected virtualization microsoft.
May 27 17:02:05.107866 systemd[1]: Detected architecture arm64.
May 27 17:02:05.107871 systemd[1]: Running in initrd.
May 27 17:02:05.107876 systemd[1]: No hostname configured, using default hostname.
May 27 17:02:05.107881 systemd[1]: Hostname set to .
May 27 17:02:05.107886 systemd[1]: Initializing machine ID from random generator.
May 27 17:02:05.107891 systemd[1]: Queued start job for default target initrd.target.
May 27 17:02:05.107896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:02:05.107902 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:02:05.107909 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:02:05.107914 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:02:05.107919 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:02:05.107925 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:02:05.107931 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:02:05.107936 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:02:05.107942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:02:05.107948 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:02:05.107953 systemd[1]: Reached target paths.target - Path Units.
May 27 17:02:05.107958 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:02:05.107963 systemd[1]: Reached target swap.target - Swaps.
May 27 17:02:05.107968 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:02:05.107974 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:02:05.107979 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:02:05.107984 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:02:05.107989 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:02:05.107996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:02:05.108001 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:02:05.108006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:02:05.108011 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:02:05.108016 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:02:05.108021 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:02:05.108027 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:02:05.108032 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:02:05.108038 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:02:05.108044 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:02:05.108049 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:02:05.108054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:02:05.108059 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:02:05.108077 systemd-journald[224]: Collecting audit messages is disabled.
May 27 17:02:05.108092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:02:05.108099 systemd-journald[224]: Journal started
May 27 17:02:05.108114 systemd-journald[224]: Runtime Journal (/run/log/journal/2ac027c9ed5e4ae4a20696e884a033a8) is 8M, max 78.5M, 70.5M free.
May 27 17:02:05.090974 systemd-modules-load[225]: Inserted module 'overlay'
May 27 17:02:05.121869 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:02:05.130682 kernel: Bridge firewalling registered
May 27 17:02:05.130736 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:02:05.130112 systemd-modules-load[225]: Inserted module 'br_netfilter'
May 27 17:02:05.141807 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:02:05.145136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:02:05.152318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:02:05.164139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 17:02:05.182870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:02:05.188425 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:02:05.209350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:02:05.221046 systemd-tmpfiles[253]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:02:05.230119 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:02:05.241099 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:02:05.247642 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:02:05.256082 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:02:05.268729 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:02:05.289541 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:02:05.300637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:02:05.318681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:02:05.329335 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:02:05.364648 systemd-resolved[263]: Positive Trust Anchors:
May 27 17:02:05.364664 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:02:05.364684 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:02:05.369643 systemd-resolved[263]: Defaulting to hostname 'linux'.
May 27 17:02:05.370516 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:02:05.381379 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:02:05.435637 kernel: SCSI subsystem initialized
May 27 17:02:05.441632 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:02:05.449660 kernel: iscsi: registered transport (tcp)
May 27 17:02:05.462410 kernel: iscsi: registered transport (qla4xxx)
May 27 17:02:05.462472 kernel: QLogic iSCSI HBA Driver
May 27 17:02:05.477290 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:02:05.498761 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:02:05.505208 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:02:05.565445 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:02:05.572778 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:02:05.630644 kernel: raid6: neonx8 gen() 18546 MB/s
May 27 17:02:05.649642 kernel: raid6: neonx4 gen() 18559 MB/s
May 27 17:02:05.668640 kernel: raid6: neonx2 gen() 17014 MB/s
May 27 17:02:05.688652 kernel: raid6: neonx1 gen() 15043 MB/s
May 27 17:02:05.707659 kernel: raid6: int64x8 gen() 10535 MB/s
May 27 17:02:05.726655 kernel: raid6: int64x4 gen() 10614 MB/s
May 27 17:02:05.747671 kernel: raid6: int64x2 gen() 8980 MB/s
May 27 17:02:05.769176 kernel: raid6: int64x1 gen() 6432 MB/s
May 27 17:02:05.769251 kernel: raid6: using algorithm neonx4 gen() 18559 MB/s
May 27 17:02:05.790836 kernel: raid6: .... xor() 15142 MB/s, rmw enabled
May 27 17:02:05.790914 kernel: raid6: using neon recovery algorithm
May 27 17:02:05.799589 kernel: xor: measuring software checksum speed
May 27 17:02:05.799685 kernel: 8regs : 28563 MB/sec
May 27 17:02:05.802064 kernel: 32regs : 28601 MB/sec
May 27 17:02:05.804493 kernel: arm64_neon : 37398 MB/sec
May 27 17:02:05.807195 kernel: xor: using function: arm64_neon (37398 MB/sec)
May 27 17:02:05.846650 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:02:05.852162 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:02:05.862809 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:02:05.892591 systemd-udevd[474]: Using default interface naming scheme 'v255'.
May 27 17:02:05.896833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:02:05.907794 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:02:05.937190 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
May 27 17:02:05.960710 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:02:05.967028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:02:06.020367 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:02:06.030829 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:02:06.099648 kernel: hv_vmbus: Vmbus version:5.3
May 27 17:02:06.101220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:02:06.105742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:02:06.121068 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:02:06.167735 kernel: pps_core: LinuxPPS API ver. 1 registered
May 27 17:02:06.167762 kernel: hv_vmbus: registering driver hid_hyperv
May 27 17:02:06.167772 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 27 17:02:06.167779 kernel: hv_vmbus: registering driver hyperv_keyboard
May 27 17:02:06.167785 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
May 27 17:02:06.167791 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
May 27 17:02:06.167798 kernel: hv_vmbus: registering driver hv_netvsc
May 27 17:02:06.167804 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 27 17:02:06.137303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:02:06.181394 kernel: hv_vmbus: registering driver hv_storvsc
May 27 17:02:06.181413 kernel: scsi host1: storvsc_host_t
May 27 17:02:06.173060 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:02:06.196133 kernel: scsi host0: storvsc_host_t
May 27 17:02:06.197141 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 27 17:02:06.197162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:02:06.210518 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
May 27 17:02:06.210574 kernel: PTP clock support registered
May 27 17:02:06.197251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:02:06.221282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:02:06.243686 kernel: hv_utils: Registering HyperV Utility Driver
May 27 17:02:06.243753 kernel: hv_vmbus: registering driver hv_utils
May 27 17:02:06.255794 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 27 17:02:06.256047 kernel: hv_utils: Heartbeat IC version 3.0
May 27 17:02:06.256057 kernel: hv_utils: Shutdown IC version 3.2
May 27 17:02:06.256064 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 27 17:02:06.260837 kernel: hv_utils: TimeSync IC version 4.0
May 27 17:02:06.260849 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 27 17:02:06.503537 systemd-resolved[263]: Clock change detected. Flushing caches.
May 27 17:02:06.510191 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 27 17:02:06.510367 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 27 17:02:06.518022 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#64 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 27 17:02:06.523684 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:02:06.533835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 27 17:02:06.543941 kernel: hv_netvsc 000d3ac6-026f-000d-3ac6-026f000d3ac6 eth0: VF slot 1 added
May 27 17:02:06.553245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 17:02:06.553304 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 27 17:02:06.562679 kernel: hv_vmbus: registering driver hv_pci
May 27 17:02:06.562739 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 27 17:02:06.565942 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 17:02:06.565972 kernel: hv_pci e301acb5-9a0c-43ff-844c-a0a6b35b72a7: PCI VMBus probing: Using version 0x10004
May 27 17:02:06.577021 kernel: hv_pci e301acb5-9a0c-43ff-844c-a0a6b35b72a7: PCI host bridge to bus 9a0c:00
May 27 17:02:06.577161 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 27 17:02:06.577237 kernel: pci_bus 9a0c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 27 17:02:06.585297 kernel: pci_bus 9a0c:00: No busn resource found for root bus, will use [bus 00-ff]
May 27 17:02:06.593067 kernel: pci 9a0c:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
May 27 17:02:06.598023 kernel: pci 9a0c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 27 17:02:06.602099 kernel: pci 9a0c:00:02.0: enabling Extended Tags
May 27 17:02:06.610021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#68 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:02:06.622125 kernel: pci 9a0c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9a0c:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
May 27 17:02:06.631581 kernel: pci_bus 9a0c:00: busn_res: [bus 00-ff] end is updated to 00
May 27 17:02:06.631813 kernel: pci 9a0c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
May 27 17:02:06.652027 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:02:06.705540 kernel: mlx5_core 9a0c:00:02.0: enabling device (0000 -> 0002)
May 27 17:02:06.713518 kernel: mlx5_core 9a0c:00:02.0: PTM is not supported by PCIe
May 27 17:02:06.713730 kernel: mlx5_core 9a0c:00:02.0: firmware version: 16.30.5006
May 27 17:02:06.882628 kernel: hv_netvsc 000d3ac6-026f-000d-3ac6-026f000d3ac6 eth0: VF registering: eth1
May 27 17:02:06.882869 kernel: mlx5_core 9a0c:00:02.0 eth1: joined to eth0
May 27 17:02:06.889049 kernel: mlx5_core 9a0c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 27 17:02:06.898282 kernel: mlx5_core 9a0c:00:02.0 enP39436s1: renamed from eth1
May 27 17:02:07.486973 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 27 17:02:07.520036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 27 17:02:07.534847 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 27 17:02:07.545652 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 27 17:02:07.557392 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 27 17:02:07.564176 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:02:07.598025 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 27 17:02:07.602147 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 17:02:07.610005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#80 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 27 17:02:07.616016 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 17:02:07.776547 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:02:07.804714 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:02:07.810730 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:02:07.821278 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:02:07.832233 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:02:07.865642 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:02:08.624124 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 27 17:02:08.636781 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 17:02:08.636836 disk-uuid[646]: The operation has completed successfully.
May 27 17:02:08.703629 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:02:08.707206 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:02:08.732839 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:02:08.753408 sh[817]: Success
May 27 17:02:08.788835 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:02:08.788918 kernel: device-mapper: uevent: version 1.0.3
May 27 17:02:08.793591 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:02:08.804018 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 27 17:02:09.003249 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:02:09.013219 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:02:09.032044 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:02:09.057564 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:02:09.057644 kernel: BTRFS: device fsid 3c8c76ef-f1da-40fe-979d-11bdf765e403 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (835)
May 27 17:02:09.067711 kernel: BTRFS info (device dm-0): first mount of filesystem 3c8c76ef-f1da-40fe-979d-11bdf765e403
May 27 17:02:09.067762 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 27 17:02:09.070709 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:02:09.417174 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:02:09.421179 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:02:09.428736 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:02:09.429706 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:02:09.449722 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:02:09.469472 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (858)
May 27 17:02:09.479098 kernel: BTRFS info (device sda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:02:09.479148 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:02:09.482408 kernel: BTRFS info (device sda6): using free-space-tree
May 27 17:02:09.539022 kernel: BTRFS info (device sda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:02:09.540772 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:02:09.547793 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:02:09.570754 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:02:09.581661 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:02:09.615526 systemd-networkd[1004]: lo: Link UP
May 27 17:02:09.615536 systemd-networkd[1004]: lo: Gained carrier
May 27 17:02:09.616890 systemd-networkd[1004]: Enumeration completed
May 27 17:02:09.619090 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:02:09.619115 systemd-networkd[1004]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:02:09.619118 systemd-networkd[1004]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:02:09.623692 systemd[1]: Reached target network.target - Network.
May 27 17:02:09.695011 kernel: mlx5_core 9a0c:00:02.0 enP39436s1: Link up
May 27 17:02:09.727203 kernel: hv_netvsc 000d3ac6-026f-000d-3ac6-026f000d3ac6 eth0: Data path switched to VF: enP39436s1
May 27 17:02:09.727842 systemd-networkd[1004]: enP39436s1: Link UP
May 27 17:02:09.727899 systemd-networkd[1004]: eth0: Link UP
May 27 17:02:09.728008 systemd-networkd[1004]: eth0: Gained carrier
May 27 17:02:09.728026 systemd-networkd[1004]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:02:09.746288 systemd-networkd[1004]: enP39436s1: Gained carrier
May 27 17:02:09.774100 systemd-networkd[1004]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 27 17:02:10.847163 systemd-networkd[1004]: enP39436s1: Gained IPv6LL
May 27 17:02:10.847399 systemd-networkd[1004]: eth0: Gained IPv6LL
May 27 17:02:11.173252 ignition[989]: Ignition 2.21.0
May 27 17:02:11.173267 ignition[989]: Stage: fetch-offline
May 27 17:02:11.173344 ignition[989]: no configs at "/usr/lib/ignition/base.d"
May 27 17:02:11.177734 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:02:11.173351 ignition[989]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:02:11.186114 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 17:02:11.173455 ignition[989]: parsed url from cmdline: ""
May 27 17:02:11.173457 ignition[989]: no config URL provided
May 27 17:02:11.173460 ignition[989]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:02:11.173466 ignition[989]: no config at "/usr/lib/ignition/user.ign"
May 27 17:02:11.173469 ignition[989]: failed to fetch config: resource requires networking
May 27 17:02:11.173612 ignition[989]: Ignition finished successfully
May 27 17:02:11.218067 ignition[1015]: Ignition 2.21.0
May 27 17:02:11.218074 ignition[1015]: Stage: fetch
May 27 17:02:11.218284 ignition[1015]: no configs at "/usr/lib/ignition/base.d"
May 27 17:02:11.218291 ignition[1015]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:02:11.218368 ignition[1015]: parsed url from cmdline: ""
May 27 17:02:11.218371 ignition[1015]: no config URL provided
May 27 17:02:11.218377 ignition[1015]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:02:11.218383 ignition[1015]: no config at "/usr/lib/ignition/user.ign"
May 27 17:02:11.218423 ignition[1015]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 27 17:02:11.320607 ignition[1015]: GET result: OK
May 27 17:02:11.320721 ignition[1015]: config has been read from IMDS userdata
May 27 17:02:11.320745 ignition[1015]: parsing config with SHA512: 60587795f829596299b1de2ac2ddd5fdd0c6ac3f546fdce5d2b024b26f54c9b07227fc69f66d73b2b4a4276633566fcd7320bb9f75ff4bb765259efeabbd0ab0
May 27 17:02:11.328227 unknown[1015]: fetched base config from "system"
May 27 17:02:11.328236 unknown[1015]: fetched base config from "system"
May 27 17:02:11.328601 ignition[1015]: fetch: fetch complete
May 27 17:02:11.328240 unknown[1015]: fetched user config from "azure"
May 27 17:02:11.328606 ignition[1015]: fetch: fetch passed
May 27 17:02:11.330702 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 17:02:11.328675 ignition[1015]: Ignition finished successfully
May 27 17:02:11.337195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:02:11.378590 ignition[1022]: Ignition 2.21.0
May 27 17:02:11.381344 ignition[1022]: Stage: kargs
May 27 17:02:11.381592 ignition[1022]: no configs at "/usr/lib/ignition/base.d"
May 27 17:02:11.381601 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:02:11.393971 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:02:11.382611 ignition[1022]: kargs: kargs passed
May 27 17:02:11.400955 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:02:11.382675 ignition[1022]: Ignition finished successfully
May 27 17:02:11.428236 ignition[1028]: Ignition 2.21.0
May 27 17:02:11.428251 ignition[1028]: Stage: disks
May 27 17:02:11.432309 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:02:11.428449 ignition[1028]: no configs at "/usr/lib/ignition/base.d"
May 27 17:02:11.439583 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:02:11.428458 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:02:11.447683 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:02:11.429779 ignition[1028]: disks: disks passed
May 27 17:02:11.455544 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:02:11.429852 ignition[1028]: Ignition finished successfully
May 27 17:02:11.463709 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:02:11.472891 systemd[1]: Reached target basic.target - Basic System.
May 27 17:02:11.482028 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:02:11.561293 systemd-fsck[1036]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
May 27 17:02:11.672891 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:02:11.679928 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:02:11.893010 kernel: EXT4-fs (sda9): mounted filesystem a5483afc-8426-4c3e-85ef-8146f9077e7d r/w with ordered data mode. Quota mode: none.
May 27 17:02:11.893150 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:02:11.897097 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:02:11.920721 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:02:11.928238 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:02:11.939290 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 27 17:02:11.950023 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:02:11.950071 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:02:11.999546 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1050)
May 27 17:02:11.999572 kernel: BTRFS info (device sda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:02:11.999579 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:02:11.999586 kernel: BTRFS info (device sda6): using free-space-tree
May 27 17:02:11.955763 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:02:11.987272 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:02:12.009054 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:02:12.589981 coreos-metadata[1052]: May 27 17:02:12.589 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 17:02:12.596890 coreos-metadata[1052]: May 27 17:02:12.593 INFO Fetch successful
May 27 17:02:12.596890 coreos-metadata[1052]: May 27 17:02:12.593 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 27 17:02:12.609379 coreos-metadata[1052]: May 27 17:02:12.603 INFO Fetch successful
May 27 17:02:12.654444 coreos-metadata[1052]: May 27 17:02:12.654 INFO wrote hostname ci-4344.0.0-a-910621710e to /sysroot/etc/hostname
May 27 17:02:12.660956 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:02:12.963424 initrd-setup-root[1080]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:02:13.022377 initrd-setup-root[1087]: cut: /sysroot/etc/group: No such file or directory
May 27 17:02:13.030015 initrd-setup-root[1094]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:02:13.036321 initrd-setup-root[1101]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:02:14.139828 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:02:14.146162 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:02:14.168861 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:02:14.182870 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:02:14.190741 kernel: BTRFS info (device sda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:02:14.200882 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:02:14.212826 ignition[1171]: INFO : Ignition 2.21.0
May 27 17:02:14.212826 ignition[1171]: INFO : Stage: mount
May 27 17:02:14.219326 ignition[1171]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:02:14.219326 ignition[1171]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:02:14.219326 ignition[1171]: INFO : mount: mount passed
May 27 17:02:14.219326 ignition[1171]: INFO : Ignition finished successfully
May 27 17:02:14.221060 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:02:14.236108 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:02:14.267081 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:02:14.299718 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1181)
May 27 17:02:14.299782 kernel: BTRFS info (device sda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:02:14.304330 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:02:14.307651 kernel: BTRFS info (device sda6): using free-space-tree
May 27 17:02:14.323603 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:02:14.353715 ignition[1199]: INFO : Ignition 2.21.0
May 27 17:02:14.353715 ignition[1199]: INFO : Stage: files
May 27 17:02:14.362170 ignition[1199]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:02:14.362170 ignition[1199]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:02:14.362170 ignition[1199]: DEBUG : files: compiled without relabeling support, skipping
May 27 17:02:14.485046 ignition[1199]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 17:02:14.485046 ignition[1199]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 17:02:14.550750 ignition[1199]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 17:02:14.556540 ignition[1199]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 17:02:14.556540 ignition[1199]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 17:02:14.551256 unknown[1199]: wrote ssh authorized keys file for user: core
May 27 17:02:14.856849 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
May 27 17:02:14.865245 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
May 27 17:02:15.034943 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 17:02:15.147626 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
May 27 17:02:15.147626 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 17:02:15.165197 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 17:02:15.165197 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:02:15.165197 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:02:15.165197 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:02:15.165197 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:02:15.165197 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:02:15.165197 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:02:15.224095 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:02:15.224095 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:02:15.224095 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 17:02:15.224095 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 17:02:15.224095 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 17:02:15.224095 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
May 27 17:02:15.782111 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 17:02:15.980316 ignition[1199]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 17:02:15.980316 ignition[1199]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 17:02:16.156962 ignition[1199]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:02:16.175880 ignition[1199]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:02:16.175880 ignition[1199]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 17:02:16.175880 ignition[1199]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 27 17:02:16.209006 ignition[1199]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 27 17:02:16.209006 ignition[1199]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:02:16.209006 ignition[1199]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:02:16.209006 ignition[1199]: INFO : files: files passed
May 27 17:02:16.209006 ignition[1199]: INFO : Ignition finished successfully
May 27 17:02:16.187254 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 17:02:16.198449 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 17:02:16.238056 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 17:02:16.255315 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 17:02:16.255404 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 17:02:16.278396 initrd-setup-root-after-ignition[1227]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:02:16.278396 initrd-setup-root-after-ignition[1227]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:02:16.293472 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:02:16.293889 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:02:16.309048 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:02:16.318209 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:02:16.374155 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:02:16.374288 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:02:16.385201 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:02:16.395037 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:02:16.403698 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:02:16.404559 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:02:16.441263 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:02:16.449099 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:02:16.477285 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:02:16.482860 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:02:16.493445 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:02:16.503770 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:02:16.503896 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:02:16.516687 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:02:16.521721 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:02:16.531258 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:02:16.541356 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:02:16.550652 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:02:16.560347 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:02:16.570755 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:02:16.580461 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:02:16.590958 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:02:16.604088 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:02:16.614671 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:02:16.622462 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:02:16.622587 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:02:16.635812 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:02:16.640761 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:02:16.650396 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:02:16.654461 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:02:16.660250 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:02:16.660366 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:02:16.672920 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:02:16.673030 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:02:16.682980 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:02:16.683090 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:02:16.692279 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 27 17:02:16.761315 ignition[1251]: INFO : Ignition 2.21.0
May 27 17:02:16.761315 ignition[1251]: INFO : Stage: umount
May 27 17:02:16.761315 ignition[1251]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:02:16.761315 ignition[1251]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:02:16.761315 ignition[1251]: INFO : umount: umount passed
May 27 17:02:16.761315 ignition[1251]: INFO : Ignition finished successfully
May 27 17:02:16.692392 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:02:16.703131 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:02:16.714139 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:02:16.726218 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:02:16.726624 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:02:16.734185 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:02:16.734325 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:02:16.757407 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:02:16.759776 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:02:16.773310 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:02:16.775566 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:02:16.787732 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:02:16.789531 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:02:16.789630 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:02:16.795821 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:02:16.795899 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:02:16.805608 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 17:02:16.805670 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 17:02:16.813241 systemd[1]: Stopped target network.target - Network.
May 27 17:02:16.827803 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:02:16.827889 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:02:16.838656 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:02:16.847256 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:02:16.851303 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:02:16.857302 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:02:16.861634 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:02:16.870611 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:02:16.870669 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:02:16.881711 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:02:16.882072 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:02:16.894984 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:02:16.895092 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:02:16.904610 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:02:16.904656 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:02:16.914055 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:02:16.922269 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:02:16.954864 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:02:16.955052 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:02:16.968568 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:02:16.968785 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:02:16.972208 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:02:16.986656 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:02:16.987015 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:02:16.996090 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:02:16.996135 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:02:17.166760 kernel: hv_netvsc 000d3ac6-026f-000d-3ac6-026f000d3ac6 eth0: Data path switched from VF: enP39436s1
May 27 17:02:17.012132 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:02:17.026608 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:02:17.026706 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:02:17.036322 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:02:17.036386 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:02:17.044128 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:02:17.044189 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:02:17.048563 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:02:17.048614 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:02:17.061747 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:02:17.072239 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:02:17.072312 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:02:17.111469 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:02:17.112106 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:02:17.122489 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:02:17.122546 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:02:17.133024 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:02:17.133077 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:02:17.143401 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:02:17.143465 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:02:17.162509 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:02:17.162590 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:02:17.176745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:02:17.176805 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:02:17.193864 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:02:17.210574 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:02:17.210656 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:02:17.224724 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:02:17.224790 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:02:17.240405 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 17:02:17.240475 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:02:17.257917 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:02:17.257986 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:02:17.263821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:02:17.263900 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:02:17.280678 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 17:02:17.280737 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 17:02:17.280760 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 17:02:17.280785 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:02:17.281113 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:02:17.281221 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:02:17.291492 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:02:17.291579 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:02:20.226543 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:02:20.226681 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:02:20.235307 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:02:20.239962 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:02:20.240049 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:02:20.249024 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:02:20.273039 systemd[1]: Switching root.
May 27 17:02:20.346885 systemd-journald[224]: Journal stopped
May 27 17:02:30.304003 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
May 27 17:02:30.304025 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:02:30.304033 kernel: SELinux: policy capability open_perms=1
May 27 17:02:30.304039 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:02:30.304044 kernel: SELinux: policy capability always_check_network=0
May 27 17:02:30.304049 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:02:30.304055 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:02:30.304061 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:02:30.304066 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:02:30.304071 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:02:30.304077 kernel: audit: type=1403 audit(1748365341.011:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:02:30.304083 systemd[1]: Successfully loaded SELinux policy in 150.659ms.
May 27 17:02:30.304089 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.325ms.
May 27 17:02:30.304096 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:02:30.304102 systemd[1]: Detected virtualization microsoft.
May 27 17:02:30.304109 systemd[1]: Detected architecture arm64.
May 27 17:02:30.304115 systemd[1]: Detected first boot.
May 27 17:02:30.304121 systemd[1]: Hostname set to .
May 27 17:02:30.304128 systemd[1]: Initializing machine ID from random generator.
May 27 17:02:30.304134 zram_generator::config[1294]: No configuration found.
May 27 17:02:30.304140 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:02:30.304146 systemd[1]: Populated /etc with preset unit settings.
May 27 17:02:30.304153 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:02:30.304159 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:02:30.304165 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:02:30.304170 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:02:30.304177 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:02:30.304183 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:02:30.304189 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:02:30.304196 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:02:30.304202 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:02:30.304208 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:02:30.304214 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:02:30.304220 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:02:30.304226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:02:30.304232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:02:30.304238 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:02:30.304245 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:02:30.304251 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:02:30.304258 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:02:30.304265 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 27 17:02:30.304272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:02:30.304278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:02:30.304284 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:02:30.304290 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:02:30.304297 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:02:30.304303 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:02:30.304309 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:02:30.304315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:02:30.304321 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:02:30.304327 systemd[1]: Reached target swap.target - Swaps.
May 27 17:02:30.304333 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:02:30.304339 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:02:30.304346 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:02:30.304352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:02:30.304358 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:02:30.304364 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:02:30.304371 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:02:30.304378 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:02:30.304384 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:02:30.304391 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:02:30.304397 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:02:30.304403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:02:30.304409 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:02:30.304416 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:02:30.304422 systemd[1]: Reached target machines.target - Containers.
May 27 17:02:30.304429 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:02:30.304435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:02:30.304442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:02:30.304657 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:02:30.304676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:02:30.304683 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:02:30.304690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:02:30.304697 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:02:30.304707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:02:30.304713 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:02:30.304719 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:02:30.304726 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:02:30.304732 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:02:30.304740 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:02:30.304746 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:02:30.304753 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:02:30.304760 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:02:30.304767 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:02:30.304773 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:02:30.304779 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:02:30.304811 systemd-journald[1369]: Collecting audit messages is disabled.
May 27 17:02:30.304828 systemd-journald[1369]: Journal started
May 27 17:02:30.304843 systemd-journald[1369]: Runtime Journal (/run/log/journal/90996b8de90441b6a05757b938cf717f) is 8M, max 78.5M, 70.5M free.
May 27 17:02:30.313101 kernel: loop: module loaded
May 27 17:02:29.265244 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:02:29.270622 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 27 17:02:29.270980 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:02:29.271315 systemd[1]: systemd-journald.service: Consumed 2.547s CPU time.
May 27 17:02:30.318093 kernel: fuse: init (API version 7.41)
May 27 17:02:30.334026 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:02:30.334106 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:02:30.341387 systemd[1]: Stopped verity-setup.service.
May 27 17:02:30.353776 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:02:30.354977 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:02:30.359707 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:02:30.364330 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:02:30.368539 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:02:30.373498 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:02:30.379202 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:02:30.388904 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:02:30.394656 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:02:30.394843 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:02:30.399723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:02:30.399876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:02:30.405120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:02:30.405266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:02:30.411086 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:02:30.411242 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:02:30.416122 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:02:30.416271 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:02:30.421260 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:02:30.426635 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:02:30.432482 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:02:30.445225 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:02:30.450899 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:02:30.464143 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:02:30.469502 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:02:30.469630 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:02:30.475444 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:02:30.481534 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:02:30.486150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:02:30.595224 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:02:30.605310 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:02:30.609778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:02:30.611526 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:02:30.617187 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:02:30.618527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:02:30.624189 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:02:30.630863 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:02:30.636774 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:02:30.644274 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:02:30.654590 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:02:30.670018 kernel: ACPI: bus type drm_connector registered
May 27 17:02:30.670418 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:02:30.670600 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:02:30.694172 systemd-journald[1369]: Time spent on flushing to /var/log/journal/90996b8de90441b6a05757b938cf717f is 10.941ms for 939 entries.
May 27 17:02:30.694172 systemd-journald[1369]: System Journal (/var/log/journal/90996b8de90441b6a05757b938cf717f) is 8M, max 2.6G, 2.6G free.
May 27 17:02:31.039889 systemd-journald[1369]: Received client request to flush runtime journal.
May 27 17:02:30.714894 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:02:31.028741 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:02:31.035013 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:02:31.043589 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:02:31.050100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:02:31.060254 kernel: loop0: detected capacity change from 0 to 138376
May 27 17:02:31.063203 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:02:31.093925 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:02:31.637833 systemd-tmpfiles[1424]: ACLs are not supported, ignoring.
May 27 17:02:31.637846 systemd-tmpfiles[1424]: ACLs are not supported, ignoring.
May 27 17:02:31.644049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:02:31.651053 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:02:31.653075 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:02:31.664704 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:02:32.751013 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:02:32.925011 kernel: loop1: detected capacity change from 0 to 211168
May 27 17:02:32.926466 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:02:32.933312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:02:32.957677 systemd-tmpfiles[1454]: ACLs are not supported, ignoring.
May 27 17:02:32.957694 systemd-tmpfiles[1454]: ACLs are not supported, ignoring.
May 27 17:02:32.963107 kernel: loop2: detected capacity change from 0 to 107312
May 27 17:02:32.965024 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:02:34.937019 kernel: loop3: detected capacity change from 0 to 28936
May 27 17:02:34.955107 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:02:34.961704 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:02:34.989078 systemd-udevd[1459]: Using default interface naming scheme 'v255'.
May 27 17:02:35.789022 kernel: loop4: detected capacity change from 0 to 138376
May 27 17:02:35.797045 kernel: loop5: detected capacity change from 0 to 211168
May 27 17:02:35.804045 kernel: loop6: detected capacity change from 0 to 107312
May 27 17:02:35.811028 kernel: loop7: detected capacity change from 0 to 28936
May 27 17:02:35.813106 (sd-merge)[1461]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 27 17:02:35.813518 (sd-merge)[1461]: Merged extensions into '/usr'.
May 27 17:02:35.816913 systemd[1]: Reload requested from client PID 1422 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:02:35.817084 systemd[1]: Reloading...
May 27 17:02:35.874033 zram_generator::config[1483]: No configuration found.
May 27 17:02:36.089556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:02:36.174346 systemd[1]: Reloading finished in 356 ms.
May 27 17:02:36.205875 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:02:36.223357 systemd[1]: Starting ensure-sysext.service...
May 27 17:02:36.227660 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:02:36.242827 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:02:36.242854 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:02:36.243019 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:02:36.243151 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:02:36.243559 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:02:36.243697 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
May 27 17:02:36.243727 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
May 27 17:02:36.452911 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:02:36.452927 systemd-tmpfiles[1543]: Skipping /boot
May 27 17:02:36.459512 systemd[1]: Reload requested from client PID 1542 ('systemctl') (unit ensure-sysext.service)...
May 27 17:02:36.459648 systemd[1]: Reloading...
May 27 17:02:36.460255 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:02:36.460268 systemd-tmpfiles[1543]: Skipping /boot
May 27 17:02:36.512038 zram_generator::config[1571]: No configuration found.
May 27 17:02:36.665378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:02:36.737734 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 27 17:02:36.737857 systemd[1]: Reloading finished in 277 ms.
May 27 17:02:36.746057 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:02:36.756062 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:02:36.773033 systemd[1]: Finished ensure-sysext.service.
May 27 17:02:36.778066 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
May 27 17:02:36.784462 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:02:36.801016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#78 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:02:36.807030 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:02:36.831212 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:02:36.837356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:02:36.839790 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:02:36.853203 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:02:36.859310 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:02:36.867760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:02:36.874426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:02:36.874478 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:02:36.875887 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:02:36.901893 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:02:36.910311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:02:36.915914 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:02:36.923054 kernel: hv_vmbus: registering driver hv_balloon
May 27 17:02:36.923140 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 27 17:02:36.926408 kernel: hv_balloon: Memory hot add disabled on ARM64
May 27 17:02:36.929013 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:02:36.936908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:02:36.937102 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:02:36.948067 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:02:36.948228 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:02:36.953063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:02:36.953219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:02:36.960482 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:02:36.961325 kernel: hv_vmbus: registering driver hyperv_fb
May 27 17:02:36.961404 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 27 17:02:36.968953 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 27 17:02:36.969299 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:02:36.977000 kernel: Console: switching to colour dummy device 80x25
May 27 17:02:36.977687 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
May 27 17:02:36.982040 kernel: Console: switching to colour frame buffer device 128x48
May 27 17:02:36.990091 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:02:36.990277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:02:36.993352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:02:37.015847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:02:37.016064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:02:37.026499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:02:37.054950 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:02:37.055175 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:02:37.061424 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:02:37.064514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:02:37.207474 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:02:37.212147 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:02:37.264498 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:02:37.296936 augenrules[1744]: No rules
May 27 17:02:37.298515 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:02:37.298719 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:02:37.389061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 27 17:02:37.405202 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:02:37.743775 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:02:37.759385 kernel: MACsec IEEE 802.1AE
May 27 17:02:37.755814 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:02:37.758271 systemd-resolved[1693]: Positive Trust Anchors:
May 27 17:02:37.758282 systemd-resolved[1693]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:02:37.758302 systemd-resolved[1693]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:02:37.762041 systemd-resolved[1693]: Using system hostname 'ci-4344.0.0-a-910621710e'.
May 27 17:02:37.763601 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:02:37.768613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:02:38.302315 systemd-networkd[1692]: lo: Link UP
May 27 17:02:38.302321 systemd-networkd[1692]: lo: Gained carrier
May 27 17:02:38.304023 systemd-networkd[1692]: Enumeration completed
May 27 17:02:38.304314 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:02:38.304317 systemd-networkd[1692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:02:38.304557 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:02:38.309192 systemd[1]: Reached target network.target - Network.
May 27 17:02:38.315146 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:02:38.322799 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:02:38.364025 kernel: mlx5_core 9a0c:00:02.0 enP39436s1: Link up
May 27 17:02:38.367298 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:02:38.373035 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:02:38.387027 kernel: hv_netvsc 000d3ac6-026f-000d-3ac6-026f000d3ac6 eth0: Data path switched to VF: enP39436s1
May 27 17:02:38.388287 systemd-networkd[1692]: enP39436s1: Link UP
May 27 17:02:38.388407 systemd-networkd[1692]: eth0: Link UP
May 27 17:02:38.388409 systemd-networkd[1692]: eth0: Gained carrier
May 27 17:02:38.388431 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:02:38.390226 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:02:38.396573 systemd-networkd[1692]: enP39436s1: Gained carrier
May 27 17:02:38.401048 systemd-networkd[1692]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 27 17:02:38.489973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:02:39.711184 systemd-networkd[1692]: enP39436s1: Gained IPv6LL
May 27 17:02:39.903125 systemd-networkd[1692]: eth0: Gained IPv6LL
May 27 17:02:39.905709 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 17:02:39.912726 systemd[1]: Reached target network-online.target - Network is Online.
May 27 17:02:46.727453 ldconfig[1403]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:02:46.745384 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:02:46.751950 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:02:46.792345 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:02:46.797160 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:02:46.801679 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:02:46.806734 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:02:46.812271 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:02:46.817156 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:02:46.822140 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:02:46.827089 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:02:46.827124 systemd[1]: Reached target paths.target - Path Units.
May 27 17:02:46.830527 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:02:46.837059 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:02:46.842736 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:02:46.848854 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:02:46.854086 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:02:46.858974 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:02:46.873850 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:02:46.878412 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:02:46.883779 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:02:46.888127 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:02:46.891806 systemd[1]: Reached target basic.target - Basic System.
May 27 17:02:46.895757 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:02:46.895785 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:02:46.898123 systemd[1]: Starting chronyd.service - NTP client/server...
May 27 17:02:46.902432 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:02:46.911758 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 17:02:46.922439 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:02:46.930414 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:02:46.940547 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:02:46.948363 (chronyd)[1829]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
May 27 17:02:46.949364 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:02:46.954139 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:02:46.961838 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
May 27 17:02:46.966318 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
May 27 17:02:46.967704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:02:46.973803 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:02:46.982207 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 17:02:46.987025 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:02:46.996663 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:02:47.014906 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:02:47.021424 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:02:47.028625 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:02:47.029291 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:02:47.032182 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:02:47.038195 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:02:47.054079 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:02:47.089686 KVP[1839]: KVP starting; pid is:1839
May 27 17:02:47.093730 KVP[1839]: KVP LIC Version: 3.1
May 27 17:02:47.094027 kernel: hv_utils: KVP IC version 4.0
May 27 17:02:47.095901 chronyd[1862]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
May 27 17:02:47.138813 chronyd[1862]: Timezone right/UTC failed leap second check, ignoring
May 27 17:02:47.139034 chronyd[1862]: Loaded seccomp filter (level 2)
May 27 17:02:47.141499 systemd[1]: Started chronyd.service - NTP client/server.
May 27 17:02:47.153353 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:02:47.159428 (ntainerd)[1871]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:02:47.159720 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:02:47.180297 jq[1837]: false
May 27 17:02:47.181410 jq[1853]: true
May 27 17:02:47.181002 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:02:47.181205 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:02:47.192656 jq[1875]: true
May 27 17:02:47.488679 systemd-logind[1849]: New seat seat0.
May 27 17:02:47.491315 systemd-logind[1849]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
May 27 17:02:47.493275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:02:47.505617 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:02:47.512452 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:02:47.515047 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:02:47.515242 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:02:47.530529 tar[1856]: linux-arm64/LICENSE
May 27 17:02:47.530819 tar[1856]: linux-arm64/helm
May 27 17:02:47.602925 extend-filesystems[1838]: Found loop4
May 27 17:02:47.602925 extend-filesystems[1838]: Found loop5
May 27 17:02:47.976631 update_engine[1852]: I20250527 17:02:47.622594 1852 main.cc:92] Flatcar Update Engine starting
May 27 17:02:47.977057 kubelet[1898]: E0527 17:02:47.788052 1898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:02:47.623308 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 17:02:47.977466 extend-filesystems[1838]: Found loop6
May 27 17:02:47.977466 extend-filesystems[1838]: Found loop7
May 27 17:02:47.977466 extend-filesystems[1838]: Found sda
May 27 17:02:47.977466 extend-filesystems[1838]: Found sda1
May 27 17:02:47.977466 extend-filesystems[1838]: Found sda2
May 27 17:02:47.977466 extend-filesystems[1838]: Found sda3
May 27 17:02:47.977466 extend-filesystems[1838]: Found usr
May 27 17:02:47.977466 extend-filesystems[1838]: Found sda4
May 27 17:02:47.977466 extend-filesystems[1838]: Found sda6
May 27 17:02:47.977466 extend-filesystems[1838]: Found sda7
May 27 17:02:47.977466 extend-filesystems[1838]: Found sda9
May 27 17:02:47.977466 extend-filesystems[1838]: Checking size of /dev/sda9
May 27 17:02:47.790185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:02:47.790304 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:02:47.790567 systemd[1]: kubelet.service: Consumed 557ms CPU time, 256.2M memory peak.
May 27 17:02:48.028093 extend-filesystems[1838]: Old size kept for /dev/sda9
May 27 17:02:48.028093 extend-filesystems[1838]: Found sr0
May 27 17:02:48.021974 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:02:48.022218 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:02:48.058137 bash[1915]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:02:48.060110 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:02:48.074180 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 17:02:48.354003 tar[1856]: linux-arm64/README.md
May 27 17:02:48.370096 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 17:02:48.426614 sshd_keygen[1865]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 17:02:48.439570 dbus-daemon[1835]: [system] SELinux support is enabled
May 27 17:02:48.439787 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:02:48.447171 update_engine[1852]: I20250527 17:02:48.446731 1852 update_check_scheduler.cc:74] Next update check in 2m3s
May 27 17:02:48.449333 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 17:02:48.455648 dbus-daemon[1835]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 27 17:02:48.456857 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 17:02:48.461310 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:02:48.461350 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:02:48.467329 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:02:48.467356 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:02:48.476788 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
May 27 17:02:48.482922 systemd[1]: issuegen.service: Deactivated successfully.
May 27 17:02:48.483189 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 17:02:48.498567 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:02:48.511278 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 17:02:48.519704 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:02:48.523667 coreos-metadata[1831]: May 27 17:02:48.523 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 17:02:48.526355 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
May 27 17:02:48.531535 coreos-metadata[1831]: May 27 17:02:48.531 INFO Fetch successful
May 27 17:02:48.531535 coreos-metadata[1831]: May 27 17:02:48.531 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
May 27 17:02:48.538377 coreos-metadata[1831]: May 27 17:02:48.537 INFO Fetch successful
May 27 17:02:48.538377 coreos-metadata[1831]: May 27 17:02:48.538 INFO Fetching http://168.63.129.16/machine/bb66bb81-a7cc-4795-acff-d8053b8b2a6c/679b9bf5%2De195%2D4ec8%2D9020%2Dfb3bb31b0a25.%5Fci%2D4344.0.0%2Da%2D910621710e?comp=config&type=sharedConfig&incarnation=1: Attempt #1
May 27 17:02:48.539034 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 17:02:48.544137 coreos-metadata[1831]: May 27 17:02:48.544 INFO Fetch successful
May 27 17:02:48.544947 coreos-metadata[1831]: May 27 17:02:48.544 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
May 27 17:02:48.550351 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 17:02:48.558310 coreos-metadata[1831]: May 27 17:02:48.558 INFO Fetch successful
May 27 17:02:48.565352 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 27 17:02:48.573613 systemd[1]: Reached target getty.target - Login Prompts.
May 27 17:02:48.610065 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 17:02:48.616326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 17:02:49.833645 locksmithd[2005]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:02:49.946141 containerd[1871]: time="2025-05-27T17:02:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:02:49.949019 containerd[1871]: time="2025-05-27T17:02:49.948170436Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:02:49.954027 containerd[1871]: time="2025-05-27T17:02:49.953424460Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.336µs"
May 27 17:02:49.954027 containerd[1871]: time="2025-05-27T17:02:49.953471156Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:02:49.954027 containerd[1871]: time="2025-05-27T17:02:49.953487276Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:02:49.954027 containerd[1871]: time="2025-05-27T17:02:49.953680308Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:02:49.954027 containerd[1871]: time="2025-05-27T17:02:49.953693740Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:02:49.954027 containerd[1871]: time="2025-05-27T17:02:49.953713340Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:02:49.954027 containerd[1871]: time="2025-05-27T17:02:49.953754964Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:02:49.954027 containerd[1871]: time="2025-05-27T17:02:49.953761532Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:02:49.954027 containerd[1871]: time="2025-05-27T17:02:49.953976524Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:02:49.954280 containerd[1871]: time="2025-05-27T17:02:49.953985940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:02:49.954327 containerd[1871]: time="2025-05-27T17:02:49.954316388Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:02:49.954370 containerd[1871]: time="2025-05-27T17:02:49.954357500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:02:49.954507 containerd[1871]: time="2025-05-27T17:02:49.954492852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:02:49.954773 containerd[1871]: time="2025-05-27T17:02:49.954749428Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:02:49.954865 containerd[1871]: time="2025-05-27T17:02:49.954852740Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:02:49.954913 containerd[1871]: time="2025-05-27T17:02:49.954901196Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:02:49.955003 containerd[1871]: time="2025-05-27T17:02:49.954978260Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:02:49.955228 containerd[1871]: time="2025-05-27T17:02:49.955210508Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 17:02:49.955369 containerd[1871]: time="2025-05-27T17:02:49.955354796Z" level=info msg="metadata content store policy set" policy=shared
May 27 17:02:50.232052 containerd[1871]: time="2025-05-27T17:02:50.231943804Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 17:02:50.232052 containerd[1871]: time="2025-05-27T17:02:50.232042268Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 17:02:50.232052 containerd[1871]: time="2025-05-27T17:02:50.232069868Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 17:02:50.232241 containerd[1871]: time="2025-05-27T17:02:50.232084332Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 17:02:50.232241 containerd[1871]: time="2025-05-27T17:02:50.232096420Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 17:02:50.232241 containerd[1871]: time="2025-05-27T17:02:50.232107532Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 17:02:50.232241 containerd[1871]: time="2025-05-27T17:02:50.232116580Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 17:02:50.232241 containerd[1871]: time="2025-05-27T17:02:50.232125524Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 17:02:50.232241 containerd[1871]: time="2025-05-27T17:02:50.232133052Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 17:02:50.232241 containerd[1871]: time="2025-05-27T17:02:50.232139420Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 17:02:50.232241 containerd[1871]: time="2025-05-27T17:02:50.232145108Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 17:02:50.232241 containerd[1871]: time="2025-05-27T17:02:50.232155260Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 17:02:50.232357 containerd[1871]: time="2025-05-27T17:02:50.232327172Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 17:02:50.232357 containerd[1871]: time="2025-05-27T17:02:50.232343292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 17:02:50.232357 containerd[1871]: time="2025-05-27T17:02:50.232354196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 17:02:50.232390 containerd[1871]: time="2025-05-27T17:02:50.232361668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 17:02:50.232390 containerd[1871]: time="2025-05-27T17:02:50.232370116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 17:02:50.232390 containerd[1871]: time="2025-05-27T17:02:50.232377380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 17:02:50.232390 containerd[1871]: time="2025-05-27T17:02:50.232384348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 17:02:50.232438 containerd[1871]: time="2025-05-27T17:02:50.232390732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 17:02:50.232438 containerd[1871]: time="2025-05-27T17:02:50.232399404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 17:02:50.232438 containerd[1871]: time="2025-05-27T17:02:50.232405548Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 17:02:50.232438 containerd[1871]: time="2025-05-27T17:02:50.232411588Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 17:02:50.232492 containerd[1871]: time="2025-05-27T17:02:50.232482228Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 17:02:50.232505 containerd[1871]: time="2025-05-27T17:02:50.232497884Z" level=info msg="Start snapshots syncer"
May 27 17:02:50.232567 containerd[1871]: time="2025-05-27T17:02:50.232519116Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 17:02:50.232764 containerd[1871]: time="2025-05-27T17:02:50.232733324Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 17:02:50.232889 containerd[1871]: time="2025-05-27T17:02:50.232775692Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 17:02:50.232889 containerd[1871]: time="2025-05-27T17:02:50.232841268Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 17:02:50.232985 containerd[1871]: time="2025-05-27T17:02:50.232964836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 17:02:50.233033 containerd[1871]: time="2025-05-27T17:02:50.232985820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 17:02:50.233033 containerd[1871]: time="2025-05-27T17:02:50.233012772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 17:02:50.233033 containerd[1871]: time="2025-05-27T17:02:50.233020708Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 17:02:50.233033 containerd[1871]: time="2025-05-27T17:02:50.233029060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 17:02:50.233090 containerd[1871]: time="2025-05-27T17:02:50.233035908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 17:02:50.233090 containerd[1871]: time="2025-05-27T17:02:50.233043404Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 17:02:50.233090 containerd[1871]: time="2025-05-27T17:02:50.233063580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 17:02:50.233090 containerd[1871]: time="2025-05-27T17:02:50.233071852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 17:02:50.233090 containerd[1871]: time="2025-05-27T17:02:50.233081700Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233115132Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233127236Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233132980Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233138588Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233143300Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233148764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233156116Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233169300Z" level=info msg="runtime interface created"
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233172468Z" level=info msg="created NRI interface"
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233178500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233188308Z" level=info msg="Connect containerd service"
May 27 17:02:50.233174 containerd[1871]: time="2025-05-27T17:02:50.233209564Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 17:02:50.233991 containerd[1871]: time="2025-05-27T17:02:50.233958332Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 17:02:52.839040 containerd[1871]: time="2025-05-27T17:02:52.838855852Z" level=info msg="Start subscribing containerd event"
May 27 17:02:52.839040 containerd[1871]: time="2025-05-27T17:02:52.838937324Z" level=info msg="Start recovering state"
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839068452Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839241972Z" level=info msg="Start event monitor"
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839263388Z" level=info msg="Start cni network conf syncer for default"
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839273276Z" level=info msg="Start streaming server"
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839281116Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839286244Z" level=info msg="runtime interface starting up..."
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839290244Z" level=info msg="starting plugins..."
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839303940Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839450324Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 17:02:52.839655 containerd[1871]: time="2025-05-27T17:02:52.839525476Z" level=info msg="containerd successfully booted in 2.893796s"
May 27 17:02:52.839815 systemd[1]: Started containerd.service - containerd container runtime.
May 27 17:02:52.844882 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 17:02:52.850689 systemd[1]: Startup finished in 1.666s (kernel) + 16.001s (initrd) + 31.983s (userspace) = 49.650s. May 27 17:02:53.939763 login[2009]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 27 17:02:53.943588 login[2010]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 27 17:02:53.956176 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:02:53.957275 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 17:02:53.960540 systemd-logind[1849]: New session 2 of user core. May 27 17:02:53.964301 systemd-logind[1849]: New session 1 of user core. May 27 17:02:53.975705 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:02:53.978944 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 17:02:54.079444 (systemd)[2047]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:02:54.081930 systemd-logind[1849]: New session c1 of user core. May 27 17:02:55.665897 systemd[2047]: Queued start job for default target default.target. May 27 17:02:55.673850 systemd[2047]: Created slice app.slice - User Application Slice. May 27 17:02:55.673872 systemd[2047]: Reached target paths.target - Paths. May 27 17:02:55.673911 systemd[2047]: Reached target timers.target - Timers. May 27 17:02:55.675098 systemd[2047]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:02:55.682763 systemd[2047]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:02:55.682822 systemd[2047]: Reached target sockets.target - Sockets. May 27 17:02:55.682870 systemd[2047]: Reached target basic.target - Basic System. May 27 17:02:55.682892 systemd[2047]: Reached target default.target - Main User Target. 
May 27 17:02:55.682915 systemd[2047]: Startup finished in 1.591s. May 27 17:02:55.683043 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:02:55.684711 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:02:55.685281 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 17:02:57.014939 waagent[2006]: 2025-05-27T17:02:57.014844Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 27 17:02:57.019179 waagent[2006]: 2025-05-27T17:02:57.019120Z INFO Daemon Daemon OS: flatcar 4344.0.0 May 27 17:02:57.022372 waagent[2006]: 2025-05-27T17:02:57.022333Z INFO Daemon Daemon Python: 3.11.12 May 27 17:02:57.026650 waagent[2006]: 2025-05-27T17:02:57.026596Z INFO Daemon Daemon Run daemon May 27 17:02:57.030078 waagent[2006]: 2025-05-27T17:02:57.030034Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.0.0' May 27 17:02:57.036464 waagent[2006]: 2025-05-27T17:02:57.036171Z INFO Daemon Daemon Using waagent for provisioning May 27 17:02:57.040784 waagent[2006]: 2025-05-27T17:02:57.040743Z INFO Daemon Daemon Activate resource disk May 27 17:02:57.044451 waagent[2006]: 2025-05-27T17:02:57.044412Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 27 17:02:57.052461 waagent[2006]: 2025-05-27T17:02:57.052412Z INFO Daemon Daemon Found device: None May 27 17:02:57.056194 waagent[2006]: 2025-05-27T17:02:57.056130Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 27 17:02:57.062275 waagent[2006]: 2025-05-27T17:02:57.062235Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 27 17:02:57.071144 waagent[2006]: 2025-05-27T17:02:57.071071Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 27 17:02:57.075682 waagent[2006]: 2025-05-27T17:02:57.075632Z INFO Daemon 
Daemon Running default provisioning handler May 27 17:02:57.084958 waagent[2006]: 2025-05-27T17:02:57.084885Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 27 17:02:57.095879 waagent[2006]: 2025-05-27T17:02:57.095811Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 27 17:02:57.102811 waagent[2006]: 2025-05-27T17:02:57.102752Z INFO Daemon Daemon cloud-init is enabled: False May 27 17:02:57.106935 waagent[2006]: 2025-05-27T17:02:57.106684Z INFO Daemon Daemon Copying ovf-env.xml May 27 17:02:57.341313 waagent[2006]: 2025-05-27T17:02:57.341171Z INFO Daemon Daemon Successfully mounted dvd May 27 17:02:57.369246 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 27 17:02:57.372513 waagent[2006]: 2025-05-27T17:02:57.372417Z INFO Daemon Daemon Detect protocol endpoint May 27 17:02:57.376508 waagent[2006]: 2025-05-27T17:02:57.376446Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 27 17:02:57.380813 waagent[2006]: 2025-05-27T17:02:57.380772Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 27 17:02:57.385635 waagent[2006]: 2025-05-27T17:02:57.385603Z INFO Daemon Daemon Test for route to 168.63.129.16 May 27 17:02:57.389413 waagent[2006]: 2025-05-27T17:02:57.389375Z INFO Daemon Daemon Route to 168.63.129.16 exists May 27 17:02:57.393274 waagent[2006]: 2025-05-27T17:02:57.393241Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 27 17:02:57.693144 waagent[2006]: 2025-05-27T17:02:57.693091Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 27 17:02:57.698127 waagent[2006]: 2025-05-27T17:02:57.698086Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 27 17:02:57.701931 waagent[2006]: 2025-05-27T17:02:57.701893Z INFO Daemon Daemon Server preferred version:2015-04-05 May 27 17:02:57.853621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 17:02:57.855473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:02:58.049586 waagent[2006]: 2025-05-27T17:02:58.049384Z INFO Daemon Daemon Initializing goal state during protocol detection May 27 17:02:58.055093 waagent[2006]: 2025-05-27T17:02:58.055020Z INFO Daemon Daemon Forcing an update of the goal state. May 27 17:02:58.063393 waagent[2006]: 2025-05-27T17:02:58.063340Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 27 17:02:58.125687 waagent[2006]: 2025-05-27T17:02:58.124529Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 27 17:02:58.128939 waagent[2006]: 2025-05-27T17:02:58.128881Z INFO Daemon May 27 17:02:58.130898 waagent[2006]: 2025-05-27T17:02:58.130856Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 023e83eb-4fef-4af6-81fa-9eee1bfab817 eTag: 18035478833395676529 source: Fabric] May 27 17:02:58.139428 waagent[2006]: 2025-05-27T17:02:58.139373Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
May 27 17:02:58.144208 waagent[2006]: 2025-05-27T17:02:58.144167Z INFO Daemon May 27 17:02:58.146181 waagent[2006]: 2025-05-27T17:02:58.146151Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 27 17:02:58.158686 waagent[2006]: 2025-05-27T17:02:58.158644Z INFO Daemon Daemon Downloading artifacts profile blob May 27 17:02:58.232114 waagent[2006]: 2025-05-27T17:02:58.232036Z INFO Daemon Downloaded certificate {'thumbprint': '32DCDE03E71946BD676DE7E66F0E85DF7A1CF01B', 'hasPrivateKey': False} May 27 17:02:58.239671 waagent[2006]: 2025-05-27T17:02:58.239614Z INFO Daemon Downloaded certificate {'thumbprint': '1DEDB5BA407B43F402AC88366CB3C7ACB3F6E995', 'hasPrivateKey': True} May 27 17:02:58.246977 waagent[2006]: 2025-05-27T17:02:58.246920Z INFO Daemon Fetch goal state completed May 27 17:02:58.257530 waagent[2006]: 2025-05-27T17:02:58.257480Z INFO Daemon Daemon Starting provisioning May 27 17:02:58.261667 waagent[2006]: 2025-05-27T17:02:58.261616Z INFO Daemon Daemon Handle ovf-env.xml. May 27 17:02:58.265215 waagent[2006]: 2025-05-27T17:02:58.265169Z INFO Daemon Daemon Set hostname [ci-4344.0.0-a-910621710e] May 27 17:02:58.295093 waagent[2006]: 2025-05-27T17:02:58.295026Z INFO Daemon Daemon Publish hostname [ci-4344.0.0-a-910621710e] May 27 17:02:58.299949 waagent[2006]: 2025-05-27T17:02:58.299842Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 27 17:02:58.304562 waagent[2006]: 2025-05-27T17:02:58.304519Z INFO Daemon Daemon Primary interface is [eth0] May 27 17:02:58.314950 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:02:58.314957 systemd-networkd[1692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 27 17:02:58.315030 systemd-networkd[1692]: eth0: DHCP lease lost May 27 17:02:58.316277 waagent[2006]: 2025-05-27T17:02:58.316214Z INFO Daemon Daemon Create user account if not exists May 27 17:02:58.320420 waagent[2006]: 2025-05-27T17:02:58.320362Z INFO Daemon Daemon User core already exists, skip useradd May 27 17:02:58.325266 waagent[2006]: 2025-05-27T17:02:58.325207Z INFO Daemon Daemon Configure sudoer May 27 17:02:58.344116 systemd-networkd[1692]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 27 17:02:58.379734 waagent[2006]: 2025-05-27T17:02:58.379524Z INFO Daemon Daemon Configure sshd May 27 17:02:58.392625 waagent[2006]: 2025-05-27T17:02:58.390294Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 27 17:02:58.402018 waagent[2006]: 2025-05-27T17:02:58.400099Z INFO Daemon Daemon Deploy ssh public key. May 27 17:02:58.873805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:02:58.879502 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:02:58.907814 kubelet[2110]: E0527 17:02:58.907718 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:02:58.911052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:02:58.911338 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:02:58.911927 systemd[1]: kubelet.service: Consumed 124ms CPU time, 105.9M memory peak. 
May 27 17:02:59.621282 waagent[2006]: 2025-05-27T17:02:59.621229Z INFO Daemon Daemon Provisioning complete May 27 17:02:59.636000 waagent[2006]: 2025-05-27T17:02:59.635941Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 27 17:02:59.641794 waagent[2006]: 2025-05-27T17:02:59.641727Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 27 17:02:59.650009 waagent[2006]: 2025-05-27T17:02:59.649902Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 27 17:02:59.760015 waagent[2117]: 2025-05-27T17:02:59.759906Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 27 17:02:59.760902 waagent[2117]: 2025-05-27T17:02:59.760496Z INFO ExtHandler ExtHandler OS: flatcar 4344.0.0 May 27 17:02:59.760902 waagent[2117]: 2025-05-27T17:02:59.760563Z INFO ExtHandler ExtHandler Python: 3.11.12 May 27 17:02:59.760902 waagent[2117]: 2025-05-27T17:02:59.760606Z INFO ExtHandler ExtHandler CPU Arch: aarch64 May 27 17:02:59.783033 waagent[2117]: 2025-05-27T17:02:59.782376Z INFO ExtHandler ExtHandler Distro: flatcar-4344.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; May 27 17:02:59.783033 waagent[2117]: 2025-05-27T17:02:59.782610Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 17:02:59.783033 waagent[2117]: 2025-05-27T17:02:59.782662Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 17:02:59.792838 waagent[2117]: 2025-05-27T17:02:59.792739Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 27 17:02:59.801448 waagent[2117]: 2025-05-27T17:02:59.801396Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 27 17:02:59.801953 waagent[2117]: 2025-05-27T17:02:59.801904Z INFO ExtHandler May 27 17:02:59.802025 waagent[2117]: 
2025-05-27T17:02:59.801980Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ebd5b938-e91d-4f31-b994-a42a72857a99 eTag: 18035478833395676529 source: Fabric] May 27 17:02:59.802291 waagent[2117]: 2025-05-27T17:02:59.802259Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 27 17:02:59.802731 waagent[2117]: 2025-05-27T17:02:59.802695Z INFO ExtHandler May 27 17:02:59.802762 waagent[2117]: 2025-05-27T17:02:59.802747Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 27 17:02:59.807092 waagent[2117]: 2025-05-27T17:02:59.807039Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 27 17:02:59.883801 waagent[2117]: 2025-05-27T17:02:59.883648Z INFO ExtHandler Downloaded certificate {'thumbprint': '32DCDE03E71946BD676DE7E66F0E85DF7A1CF01B', 'hasPrivateKey': False} May 27 17:02:59.884149 waagent[2117]: 2025-05-27T17:02:59.884110Z INFO ExtHandler Downloaded certificate {'thumbprint': '1DEDB5BA407B43F402AC88366CB3C7ACB3F6E995', 'hasPrivateKey': True} May 27 17:02:59.884518 waagent[2117]: 2025-05-27T17:02:59.884485Z INFO ExtHandler Fetch goal state completed May 27 17:02:59.901596 waagent[2117]: 2025-05-27T17:02:59.901510Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 27 17:02:59.905880 waagent[2117]: 2025-05-27T17:02:59.905778Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2117 May 27 17:02:59.906084 waagent[2117]: 2025-05-27T17:02:59.905947Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 27 17:02:59.908403 waagent[2117]: 2025-05-27T17:02:59.906304Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 27 17:02:59.908403 waagent[2117]: 2025-05-27T17:02:59.907540Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.0.0', '', 
'Flatcar Container Linux by Kinvolk'] May 27 17:02:59.908403 waagent[2117]: 2025-05-27T17:02:59.907901Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 27 17:02:59.908403 waagent[2117]: 2025-05-27T17:02:59.908053Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 27 17:02:59.908599 waagent[2117]: 2025-05-27T17:02:59.908525Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 27 17:02:59.929767 waagent[2117]: 2025-05-27T17:02:59.929722Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 27 17:02:59.929968 waagent[2117]: 2025-05-27T17:02:59.929938Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 27 17:02:59.935796 waagent[2117]: 2025-05-27T17:02:59.935756Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 27 17:02:59.941740 systemd[1]: Reload requested from client PID 2134 ('systemctl') (unit waagent.service)... May 27 17:02:59.942132 systemd[1]: Reloading... May 27 17:03:00.029060 zram_generator::config[2180]: No configuration found. May 27 17:03:00.100959 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:03:00.185366 systemd[1]: Reloading finished in 242 ms. 
May 27 17:03:00.215959 waagent[2117]: 2025-05-27T17:03:00.213195Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 27 17:03:00.215959 waagent[2117]: 2025-05-27T17:03:00.213354Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 27 17:03:00.554098 waagent[2117]: 2025-05-27T17:03:00.553894Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 27 17:03:00.554315 waagent[2117]: 2025-05-27T17:03:00.554273Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 27 17:03:00.555099 waagent[2117]: 2025-05-27T17:03:00.555016Z INFO ExtHandler ExtHandler Starting env monitor service. May 27 17:03:00.555445 waagent[2117]: 2025-05-27T17:03:00.555399Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 27 17:03:00.555865 waagent[2117]: 2025-05-27T17:03:00.555817Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 27 17:03:00.556043 waagent[2117]: 2025-05-27T17:03:00.556012Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
May 27 17:03:00.556352 waagent[2117]: 2025-05-27T17:03:00.556330Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 17:03:00.556417 waagent[2117]: 2025-05-27T17:03:00.556289Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 27 17:03:00.556599 waagent[2117]: 2025-05-27T17:03:00.556560Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 27 17:03:00.556672 waagent[2117]: 2025-05-27T17:03:00.556646Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 27 17:03:00.556750 waagent[2117]: 2025-05-27T17:03:00.556726Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 27 17:03:00.557041 waagent[2117]: 2025-05-27T17:03:00.556978Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 17:03:00.557760 waagent[2117]: 2025-05-27T17:03:00.557731Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 27 17:03:00.557760 waagent[2117]: 2025-05-27T17:03:00.557692Z INFO EnvHandler ExtHandler Configure routes May 27 17:03:00.558190 waagent[2117]: 2025-05-27T17:03:00.558139Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
May 27 17:03:00.558712 waagent[2117]: 2025-05-27T17:03:00.558615Z INFO EnvHandler ExtHandler Gateway:None May 27 17:03:00.559221 waagent[2117]: 2025-05-27T17:03:00.559187Z INFO EnvHandler ExtHandler Routes:None May 27 17:03:00.559433 waagent[2117]: 2025-05-27T17:03:00.559375Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 27 17:03:00.559433 waagent[2117]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 27 17:03:00.559433 waagent[2117]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 27 17:03:00.559433 waagent[2117]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 27 17:03:00.559433 waagent[2117]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 27 17:03:00.559433 waagent[2117]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 27 17:03:00.559433 waagent[2117]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 27 17:03:00.565580 waagent[2117]: 2025-05-27T17:03:00.565525Z INFO ExtHandler ExtHandler May 27 17:03:00.565704 waagent[2117]: 2025-05-27T17:03:00.565614Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 744ee195-2c57-47a5-9d10-03ad59869d1b correlation 87fbeeb3-5851-40ff-9912-3e4bd8ed2dd1 created: 2025-05-27T17:01:34.215965Z] May 27 17:03:00.565970 waagent[2117]: 2025-05-27T17:03:00.565924Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 27 17:03:00.566451 waagent[2117]: 2025-05-27T17:03:00.566416Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] May 27 17:03:00.611219 waagent[2117]: 2025-05-27T17:03:00.610789Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command May 27 17:03:00.611219 waagent[2117]: Try `iptables -h' or 'iptables --help' for more information.) 
May 27 17:03:00.611400 waagent[2117]: 2025-05-27T17:03:00.611267Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8CADFB3C-FE6F-4D1F-9C8D-37731C3C8E78;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] May 27 17:03:00.680088 waagent[2117]: 2025-05-27T17:03:00.679985Z INFO MonitorHandler ExtHandler Network interfaces: May 27 17:03:00.680088 waagent[2117]: Executing ['ip', '-a', '-o', 'link']: May 27 17:03:00.680088 waagent[2117]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 27 17:03:00.680088 waagent[2117]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c6:02:6f brd ff:ff:ff:ff:ff:ff May 27 17:03:00.680088 waagent[2117]: 3: enP39436s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c6:02:6f brd ff:ff:ff:ff:ff:ff\ altname enP39436p0s2 May 27 17:03:00.680088 waagent[2117]: Executing ['ip', '-4', '-a', '-o', 'address']: May 27 17:03:00.680088 waagent[2117]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 27 17:03:00.680088 waagent[2117]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 27 17:03:00.680088 waagent[2117]: Executing ['ip', '-6', '-a', '-o', 'address']: May 27 17:03:00.680088 waagent[2117]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 27 17:03:00.680088 waagent[2117]: 2: eth0 inet6 fe80::20d:3aff:fec6:26f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 27 17:03:00.680088 waagent[2117]: 3: enP39436s1 inet6 fe80::20d:3aff:fec6:26f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 27 17:03:00.714890 waagent[2117]: 2025-05-27T17:03:00.714827Z INFO 
EnvHandler ExtHandler Created firewall rules for the Azure Fabric: May 27 17:03:00.714890 waagent[2117]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:03:00.714890 waagent[2117]: pkts bytes target prot opt in out source destination May 27 17:03:00.714890 waagent[2117]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 27 17:03:00.714890 waagent[2117]: pkts bytes target prot opt in out source destination May 27 17:03:00.714890 waagent[2117]: Chain OUTPUT (policy ACCEPT 6 packets, 518 bytes) May 27 17:03:00.714890 waagent[2117]: pkts bytes target prot opt in out source destination May 27 17:03:00.714890 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 27 17:03:00.714890 waagent[2117]: 6 698 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 27 17:03:00.714890 waagent[2117]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 27 17:03:00.718673 waagent[2117]: 2025-05-27T17:03:00.718609Z INFO EnvHandler ExtHandler Current Firewall rules: May 27 17:03:00.718673 waagent[2117]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 27 17:03:00.718673 waagent[2117]: pkts bytes target prot opt in out source destination May 27 17:03:00.718673 waagent[2117]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 27 17:03:00.718673 waagent[2117]: pkts bytes target prot opt in out source destination May 27 17:03:00.718673 waagent[2117]: Chain OUTPUT (policy ACCEPT 6 packets, 518 bytes) May 27 17:03:00.718673 waagent[2117]: pkts bytes target prot opt in out source destination May 27 17:03:00.718673 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 27 17:03:00.718673 waagent[2117]: 9 1054 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 27 17:03:00.718673 waagent[2117]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 27 17:03:00.718936 waagent[2117]: 2025-05-27T17:03:00.718907Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 27 
17:03:09.103604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 17:03:09.105658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:03:09.208714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:03:09.214298 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:03:09.338668 kubelet[2269]: E0527 17:03:09.338591 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:03:09.341285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:03:09.341566 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:03:09.342214 systemd[1]: kubelet.service: Consumed 115ms CPU time, 107.5M memory peak. May 27 17:03:10.924665 chronyd[1862]: Selected source PHC0 May 27 17:03:19.353648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 27 17:03:19.355568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:03:19.460619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:03:19.469347 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:03:19.496882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:03:19.926352 kubelet[2284]: E0527 17:03:19.494662 2284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:03:19.497022 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:03:19.497588 systemd[1]: kubelet.service: Consumed 111ms CPU time, 105.1M memory peak. May 27 17:03:25.071556 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 27 17:03:29.603802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 27 17:03:29.605879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:03:29.821805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:03:29.824977 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:03:29.862378 kubelet[2298]: E0527 17:03:29.862161 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:03:29.865955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:03:29.866354 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 27 17:03:29.867333 systemd[1]: kubelet.service: Consumed 128ms CPU time, 107.4M memory peak.
May 27 17:03:33.233462 update_engine[1852]: I20250527 17:03:33.232933 1852 update_attempter.cc:509] Updating boot flags...
May 27 17:03:35.963696 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 17:03:35.965145 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:41432.service - OpenSSH per-connection server daemon (10.200.16.10:41432).
May 27 17:03:36.582738 sshd[2370]: Accepted publickey for core from 10.200.16.10 port 41432 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:03:36.583961 sshd-session[2370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:36.588343 systemd-logind[1849]: New session 3 of user core.
May 27 17:03:36.596162 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 17:03:36.991049 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:41444.service - OpenSSH per-connection server daemon (10.200.16.10:41444).
May 27 17:03:37.476367 sshd[2375]: Accepted publickey for core from 10.200.16.10 port 41444 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:03:37.477635 sshd-session[2375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:37.481576 systemd-logind[1849]: New session 4 of user core.
May 27 17:03:37.491181 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 17:03:37.830392 sshd[2377]: Connection closed by 10.200.16.10 port 41444
May 27 17:03:37.830950 sshd-session[2375]: pam_unix(sshd:session): session closed for user core
May 27 17:03:37.834538 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:41444.service: Deactivated successfully.
May 27 17:03:37.836408 systemd[1]: session-4.scope: Deactivated successfully.
May 27 17:03:37.837301 systemd-logind[1849]: Session 4 logged out. Waiting for processes to exit.
May 27 17:03:37.838688 systemd-logind[1849]: Removed session 4.
May 27 17:03:37.925407 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:41460.service - OpenSSH per-connection server daemon (10.200.16.10:41460).
May 27 17:03:38.372796 sshd[2383]: Accepted publickey for core from 10.200.16.10 port 41460 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:03:38.374182 sshd-session[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:38.378747 systemd-logind[1849]: New session 5 of user core.
May 27 17:03:38.388236 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 17:03:38.710081 sshd[2385]: Connection closed by 10.200.16.10 port 41460
May 27 17:03:38.710647 sshd-session[2383]: pam_unix(sshd:session): session closed for user core
May 27 17:03:38.714504 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:41460.service: Deactivated successfully.
May 27 17:03:38.716412 systemd[1]: session-5.scope: Deactivated successfully.
May 27 17:03:38.717164 systemd-logind[1849]: Session 5 logged out. Waiting for processes to exit.
May 27 17:03:38.718771 systemd-logind[1849]: Removed session 5.
May 27 17:03:38.792349 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:41468.service - OpenSSH per-connection server daemon (10.200.16.10:41468).
May 27 17:03:39.252612 sshd[2391]: Accepted publickey for core from 10.200.16.10 port 41468 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:03:39.253892 sshd-session[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:39.259018 systemd-logind[1849]: New session 6 of user core.
May 27 17:03:39.264232 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 17:03:39.596694 sshd[2393]: Connection closed by 10.200.16.10 port 41468
May 27 17:03:39.596511 sshd-session[2391]: pam_unix(sshd:session): session closed for user core
May 27 17:03:39.600920 systemd-logind[1849]: Session 6 logged out. Waiting for processes to exit.
May 27 17:03:39.601146 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:41468.service: Deactivated successfully.
May 27 17:03:39.602983 systemd[1]: session-6.scope: Deactivated successfully.
May 27 17:03:39.606132 systemd-logind[1849]: Removed session 6.
May 27 17:03:39.688320 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:54012.service - OpenSSH per-connection server daemon (10.200.16.10:54012).
May 27 17:03:40.088698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 27 17:03:40.090768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:03:40.173333 sshd[2399]: Accepted publickey for core from 10.200.16.10 port 54012 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:03:40.174708 sshd-session[2399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:40.182542 systemd-logind[1849]: New session 7 of user core.
May 27 17:03:40.189189 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 17:03:40.488809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:03:40.493363 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:03:40.525089 kubelet[2411]: E0527 17:03:40.525011 2411 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:03:40.527572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:03:40.527694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:03:40.528021 systemd[1]: kubelet.service: Consumed 122ms CPU time, 105.3M memory peak.
May 27 17:03:40.898129 sudo[2407]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 17:03:40.898376 sudo[2407]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:03:40.938208 sudo[2407]: pam_unix(sudo:session): session closed for user root
May 27 17:03:41.015242 sshd[2404]: Connection closed by 10.200.16.10 port 54012
May 27 17:03:41.015085 sshd-session[2399]: pam_unix(sshd:session): session closed for user core
May 27 17:03:41.019246 systemd-logind[1849]: Session 7 logged out. Waiting for processes to exit.
May 27 17:03:41.019328 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:54012.service: Deactivated successfully.
May 27 17:03:41.021133 systemd[1]: session-7.scope: Deactivated successfully.
May 27 17:03:41.023560 systemd-logind[1849]: Removed session 7.
May 27 17:03:41.109580 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:54018.service - OpenSSH per-connection server daemon (10.200.16.10:54018).
May 27 17:03:41.558342 sshd[2423]: Accepted publickey for core from 10.200.16.10 port 54018 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:03:41.559639 sshd-session[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:41.564050 systemd-logind[1849]: New session 8 of user core.
May 27 17:03:41.573175 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 17:03:41.811245 sudo[2427]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 17:03:41.812089 sudo[2427]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:03:41.823434 sudo[2427]: pam_unix(sudo:session): session closed for user root
May 27 17:03:41.827968 sudo[2426]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 17:03:41.828271 sudo[2426]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:03:41.837302 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:03:41.870969 augenrules[2449]: No rules
May 27 17:03:41.872437 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:03:41.872631 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:03:41.874050 sudo[2426]: pam_unix(sudo:session): session closed for user root
May 27 17:03:41.961920 sshd[2425]: Connection closed by 10.200.16.10 port 54018
May 27 17:03:41.962508 sshd-session[2423]: pam_unix(sshd:session): session closed for user core
May 27 17:03:41.966240 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:54018.service: Deactivated successfully.
May 27 17:03:41.969399 systemd[1]: session-8.scope: Deactivated successfully.
May 27 17:03:41.970063 systemd-logind[1849]: Session 8 logged out. Waiting for processes to exit.
May 27 17:03:41.971311 systemd-logind[1849]: Removed session 8.
May 27 17:03:42.050074 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:54026.service - OpenSSH per-connection server daemon (10.200.16.10:54026).
May 27 17:03:42.542379 sshd[2458]: Accepted publickey for core from 10.200.16.10 port 54026 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:03:42.543621 sshd-session[2458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:42.547798 systemd-logind[1849]: New session 9 of user core.
May 27 17:03:42.558222 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 17:03:42.816986 sudo[2461]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 17:03:42.817282 sudo[2461]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:03:44.603268 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 17:03:44.617550 (dockerd)[2479]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 17:03:45.623917 dockerd[2479]: time="2025-05-27T17:03:45.623296507Z" level=info msg="Starting up"
May 27 17:03:45.625739 dockerd[2479]: time="2025-05-27T17:03:45.625703875Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 17:03:45.798553 dockerd[2479]: time="2025-05-27T17:03:45.798501962Z" level=info msg="Loading containers: start."
May 27 17:03:45.835019 kernel: Initializing XFRM netlink socket
May 27 17:03:46.218128 systemd-networkd[1692]: docker0: Link UP
May 27 17:03:46.240689 dockerd[2479]: time="2025-05-27T17:03:46.240562282Z" level=info msg="Loading containers: done."
May 27 17:03:46.252057 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4072736510-merged.mount: Deactivated successfully.
May 27 17:03:46.277505 dockerd[2479]: time="2025-05-27T17:03:46.277078357Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 17:03:46.277505 dockerd[2479]: time="2025-05-27T17:03:46.277184925Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 17:03:46.277505 dockerd[2479]: time="2025-05-27T17:03:46.277327405Z" level=info msg="Initializing buildkit"
May 27 17:03:46.329471 dockerd[2479]: time="2025-05-27T17:03:46.329420821Z" level=info msg="Completed buildkit initialization"
May 27 17:03:46.335108 dockerd[2479]: time="2025-05-27T17:03:46.335058949Z" level=info msg="Daemon has completed initialization"
May 27 17:03:46.336042 dockerd[2479]: time="2025-05-27T17:03:46.335299573Z" level=info msg="API listen on /run/docker.sock"
May 27 17:03:46.335566 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 17:03:46.841755 containerd[1871]: time="2025-05-27T17:03:46.841713999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 27 17:03:47.832257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360832804.mount: Deactivated successfully.
May 27 17:03:49.114047 containerd[1871]: time="2025-05-27T17:03:49.113372068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:49.131619 containerd[1871]: time="2025-05-27T17:03:49.131558994Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349350"
May 27 17:03:49.148152 containerd[1871]: time="2025-05-27T17:03:49.148079815Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:49.163173 containerd[1871]: time="2025-05-27T17:03:49.163044829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:49.163850 containerd[1871]: time="2025-05-27T17:03:49.163669045Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 2.32191307s"
May 27 17:03:49.163850 containerd[1871]: time="2025-05-27T17:03:49.163708789Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\""
May 27 17:03:49.165222 containerd[1871]: time="2025-05-27T17:03:49.164978693Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 27 17:03:50.603548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 27 17:03:50.606435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:03:50.728713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:03:50.738606 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:03:50.834841 kubelet[2743]: E0527 17:03:50.834793 2743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:03:50.838385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:03:50.838515 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:03:50.838853 systemd[1]: kubelet.service: Consumed 121ms CPU time, 105.1M memory peak.
May 27 17:03:51.176692 containerd[1871]: time="2025-05-27T17:03:51.176063521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:51.179572 containerd[1871]: time="2025-05-27T17:03:51.179532673Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531735"
May 27 17:03:51.185942 containerd[1871]: time="2025-05-27T17:03:51.185911640Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:51.199978 containerd[1871]: time="2025-05-27T17:03:51.199931175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:51.200655 containerd[1871]: time="2025-05-27T17:03:51.200592935Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 2.035563842s"
May 27 17:03:51.200655 containerd[1871]: time="2025-05-27T17:03:51.200625439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\""
May 27 17:03:51.201616 containerd[1871]: time="2025-05-27T17:03:51.201400807Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 27 17:03:53.139765 containerd[1871]: time="2025-05-27T17:03:53.139709295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:53.146130 containerd[1871]: time="2025-05-27T17:03:53.146078543Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293731"
May 27 17:03:53.152434 containerd[1871]: time="2025-05-27T17:03:53.152374382Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:53.162531 containerd[1871]: time="2025-05-27T17:03:53.162468741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:53.163430 containerd[1871]: time="2025-05-27T17:03:53.162977277Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 1.961548798s"
May 27 17:03:53.163430 containerd[1871]: time="2025-05-27T17:03:53.163024733Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\""
May 27 17:03:53.163805 containerd[1871]: time="2025-05-27T17:03:53.163779509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 27 17:03:54.270796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157460984.mount: Deactivated successfully.
May 27 17:03:54.581442 containerd[1871]: time="2025-05-27T17:03:54.581292651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:54.586416 containerd[1871]: time="2025-05-27T17:03:54.586364459Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196004"
May 27 17:03:54.590908 containerd[1871]: time="2025-05-27T17:03:54.590852250Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:54.596459 containerd[1871]: time="2025-05-27T17:03:54.596397130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:54.596769 containerd[1871]: time="2025-05-27T17:03:54.596652258Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 1.432839077s"
May 27 17:03:54.596769 containerd[1871]: time="2025-05-27T17:03:54.596692970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\""
May 27 17:03:54.597191 containerd[1871]: time="2025-05-27T17:03:54.597169386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 27 17:03:55.361937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709942726.mount: Deactivated successfully.
May 27 17:03:56.524029 containerd[1871]: time="2025-05-27T17:03:56.523845395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:56.529168 containerd[1871]: time="2025-05-27T17:03:56.529125410Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
May 27 17:03:56.535412 containerd[1871]: time="2025-05-27T17:03:56.535378202Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:56.546005 containerd[1871]: time="2025-05-27T17:03:56.545914417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:03:56.546805 containerd[1871]: time="2025-05-27T17:03:56.546635481Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.949436583s"
May 27 17:03:56.546805 containerd[1871]: time="2025-05-27T17:03:56.546673729Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
May 27 17:03:56.547307 containerd[1871]: time="2025-05-27T17:03:56.547245809Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 17:03:57.175072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745177645.mount: Deactivated successfully.
May 27 17:03:57.219238 containerd[1871]: time="2025-05-27T17:03:57.219175632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:03:57.227551 containerd[1871]: time="2025-05-27T17:03:57.227499647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 27 17:03:57.237503 containerd[1871]: time="2025-05-27T17:03:57.237427742Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:03:57.245369 containerd[1871]: time="2025-05-27T17:03:57.245306894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:03:57.245887 containerd[1871]: time="2025-05-27T17:03:57.245741598Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 698.336453ms"
May 27 17:03:57.245887 containerd[1871]: time="2025-05-27T17:03:57.245777694Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 27 17:03:57.246373 containerd[1871]: time="2025-05-27T17:03:57.246335238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 27 17:04:00.523336 containerd[1871]: time="2025-05-27T17:04:00.523204075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:00.537520 containerd[1871]: time="2025-05-27T17:04:00.537466620Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230163"
May 27 17:04:00.547905 containerd[1871]: time="2025-05-27T17:04:00.547842019Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:00.553650 containerd[1871]: time="2025-05-27T17:04:00.553570832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:04:00.554439 containerd[1871]: time="2025-05-27T17:04:00.554296370Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.307931556s"
May 27 17:04:00.554439 containerd[1871]: time="2025-05-27T17:04:00.554333330Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
May 27 17:04:00.846907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 27 17:04:00.849244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:00.982512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:00.991720 (kubelet)[2851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:04:01.024543 kubelet[2851]: E0527 17:04:01.024500 2851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:04:01.027461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:04:01.027582 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:04:01.028075 systemd[1]: kubelet.service: Consumed 114ms CPU time, 104.9M memory peak.
May 27 17:04:03.052552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:03.052755 systemd[1]: kubelet.service: Consumed 114ms CPU time, 104.9M memory peak.
May 27 17:04:03.054877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:03.080882 systemd[1]: Reload requested from client PID 2872 ('systemctl') (unit session-9.scope)...
May 27 17:04:03.080899 systemd[1]: Reloading...
May 27 17:04:03.172112 zram_generator::config[2915]: No configuration found.
May 27 17:04:03.254840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:04:03.339902 systemd[1]: Reloading finished in 258 ms.
May 27 17:04:03.390788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:03.392719 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:03.396250 systemd[1]: kubelet.service: Deactivated successfully.
May 27 17:04:03.396446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:03.396501 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.2M memory peak.
May 27 17:04:03.398197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:14.954612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:14.964545 (kubelet)[2987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:04:14.990518 kubelet[2987]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:04:14.990927 kubelet[2987]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 17:04:14.990965 kubelet[2987]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:04:14.991122 kubelet[2987]: I0527 17:04:14.991081 2987 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:04:15.478010 kubelet[2987]: I0527 17:04:15.477826 2987 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 17:04:15.478010 kubelet[2987]: I0527 17:04:15.477864 2987 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:04:15.478179 kubelet[2987]: I0527 17:04:15.478081 2987 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 17:04:15.490434 kubelet[2987]: E0527 17:04:15.490387 2987 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 27 17:04:15.492163 kubelet[2987]: I0527 17:04:15.491993 2987 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:04:15.500096 kubelet[2987]: I0527 17:04:15.500071 2987 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:04:15.503120 kubelet[2987]: I0527 17:04:15.502804 2987 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:04:15.503740 kubelet[2987]: I0527 17:04:15.503694 2987 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:04:15.503952 kubelet[2987]: I0527 17:04:15.503817 2987 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-910621710e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 17:04:15.504122 kubelet[2987]: I0527 17:04:15.504107 2987 topology_manager.go:138] "Creating topology manager with none policy"
May 27 17:04:15.504173 kubelet[2987]: I0527 17:04:15.504165 2987 container_manager_linux.go:303] "Creating device plugin manager"
May 27 17:04:15.504923 kubelet[2987]: I0527 17:04:15.504733 2987 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:04:15.506509 kubelet[2987]: I0527 17:04:15.506487 2987 kubelet.go:480] "Attempting to sync node with API server"
May 27 17:04:15.506612 kubelet[2987]: I0527 17:04:15.506601 2987 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 17:04:15.506713 kubelet[2987]: I0527 17:04:15.506702 2987 kubelet.go:386] "Adding apiserver pod source"
May 27 17:04:15.506771 kubelet[2987]: I0527 17:04:15.506763 2987 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 17:04:15.509681 kubelet[2987]: E0527 17:04:15.509283 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-a-910621710e&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 17:04:15.510445 kubelet[2987]: E0527 17:04:15.510416 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 17:04:15.510614 kubelet[2987]: I0527 17:04:15.510525 2987 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 17:04:15.510954 kubelet[2987]: I0527 17:04:15.510936 2987 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 17:04:15.511026 kubelet[2987]: W0527 17:04:15.511014 2987 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 17:04:15.513329 kubelet[2987]: I0527 17:04:15.512924 2987 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 17:04:15.513329 kubelet[2987]: I0527 17:04:15.512970 2987 server.go:1289] "Started kubelet"
May 27 17:04:15.513772 kubelet[2987]: I0527 17:04:15.513733 2987 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 17:04:15.514481 kubelet[2987]: I0527 17:04:15.514448 2987 server.go:317] "Adding debug handlers to kubelet server"
May 27 17:04:15.515152 kubelet[2987]: I0527 17:04:15.515131 2987 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 17:04:15.518508 kubelet[2987]: I0527 17:04:15.518416 2987 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 17:04:15.518720 kubelet[2987]: I0527 17:04:15.518698 2987 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 17:04:15.520511 kubelet[2987]: I0527 17:04:15.520464 2987 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 17:04:15.522618 kubelet[2987]: E0527 17:04:15.521640 2987 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.0.0-a-910621710e.1843711e5969b26c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-a-910621710e,UID:ci-4344.0.0-a-910621710e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-a-910621710e,},FirstTimestamp:2025-05-27 17:04:15.512941164 +0000 UTC m=+0.544732718,LastTimestamp:2025-05-27 17:04:15.512941164 +0000 UTC m=+0.544732718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-a-910621710e,}" May 27 17:04:15.522755 kubelet[2987]: E0527 17:04:15.522743 2987 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-910621710e\" not found" May 27 17:04:15.523023 kubelet[2987]: I0527 17:04:15.522774 2987 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:04:15.523023 kubelet[2987]: I0527 17:04:15.522928 2987 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:04:15.523094 kubelet[2987]: I0527 17:04:15.523029 2987 reconciler.go:26] "Reconciler: start to sync state" May 27 17:04:15.523697 kubelet[2987]: E0527 17:04:15.523401 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 17:04:15.525606 kubelet[2987]: E0527 17:04:15.525573 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-910621710e?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms" May 27 17:04:15.528803 kubelet[2987]: E0527 17:04:15.528763 2987 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:04:15.528938 kubelet[2987]: I0527 17:04:15.528886 2987 factory.go:223] Registration of the containerd container factory successfully May 27 17:04:15.528938 kubelet[2987]: I0527 17:04:15.528895 2987 factory.go:223] Registration of the systemd container factory successfully May 27 17:04:15.529115 kubelet[2987]: I0527 17:04:15.529018 2987 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:04:15.540157 kubelet[2987]: I0527 17:04:15.540131 2987 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:04:15.540157 kubelet[2987]: I0527 17:04:15.540146 2987 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:04:15.540157 kubelet[2987]: I0527 17:04:15.540166 2987 state_mem.go:36] "Initialized new in-memory state store" May 27 17:04:15.547421 kubelet[2987]: I0527 17:04:15.547389 2987 policy_none.go:49] "None policy: Start" May 27 17:04:15.547421 kubelet[2987]: I0527 17:04:15.547426 2987 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:04:15.547532 kubelet[2987]: I0527 17:04:15.547439 2987 state_mem.go:35] "Initializing new in-memory state store" May 27 17:04:15.561203 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:04:15.573380 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 17:04:15.576780 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 27 17:04:15.584034 kubelet[2987]: E0527 17:04:15.583973 2987 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 17:04:15.584236 kubelet[2987]: I0527 17:04:15.584219 2987 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 17:04:15.584660 kubelet[2987]: I0527 17:04:15.584235 2987 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 17:04:15.584660 kubelet[2987]: I0527 17:04:15.584487 2987 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 17:04:15.586283 kubelet[2987]: E0527 17:04:15.586246 2987 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 17:04:15.587162 kubelet[2987]: E0527 17:04:15.587137 2987 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.0.0-a-910621710e\" not found"
May 27 17:04:15.632357 kubelet[2987]: I0527 17:04:15.632316 2987 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 17:04:15.633740 kubelet[2987]: I0527 17:04:15.633576 2987 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 17:04:15.633740 kubelet[2987]: I0527 17:04:15.633608 2987 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 17:04:15.633740 kubelet[2987]: I0527 17:04:15.633632 2987 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 17:04:15.633740 kubelet[2987]: I0527 17:04:15.633637 2987 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 17:04:15.633740 kubelet[2987]: E0527 17:04:15.633690 2987 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
May 27 17:04:15.635457 kubelet[2987]: E0527 17:04:15.635429 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 17:04:15.686981 kubelet[2987]: I0527 17:04:15.686480 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-910621710e"
May 27 17:04:15.686981 kubelet[2987]: E0527 17:04:15.686862 2987 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4344.0.0-a-910621710e"
May 27 17:04:15.726806 kubelet[2987]: E0527 17:04:15.726760 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-910621710e?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms"
May 27 17:04:15.747969 systemd[1]: Created slice kubepods-burstable-podaa928be06c3ff701c3869f58fa8a2324.slice - libcontainer container kubepods-burstable-podaa928be06c3ff701c3869f58fa8a2324.slice.
May 27 17:04:15.756359 kubelet[2987]: E0527 17:04:15.755863 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:15.761393 systemd[1]: Created slice kubepods-burstable-podd5a3678b1833d2d35b25e0988b228e54.slice - libcontainer container kubepods-burstable-podd5a3678b1833d2d35b25e0988b228e54.slice.
May 27 17:04:15.763569 kubelet[2987]: E0527 17:04:15.763543 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:15.809490 systemd[1]: Created slice kubepods-burstable-pod72e139d55cbdd6259a220fa46b535ed1.slice - libcontainer container kubepods-burstable-pod72e139d55cbdd6259a220fa46b535ed1.slice.
May 27 17:04:15.811378 kubelet[2987]: E0527 17:04:15.811344 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:15.824655 kubelet[2987]: I0527 17:04:15.824612 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e"
May 27 17:04:15.824655 kubelet[2987]: I0527 17:04:15.824655 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e"
May 27 17:04:15.824655 kubelet[2987]: I0527 17:04:15.824668 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72e139d55cbdd6259a220fa46b535ed1-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-910621710e\" (UID: \"72e139d55cbdd6259a220fa46b535ed1\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e"
May 27 17:04:15.824655 kubelet[2987]: I0527 17:04:15.824684 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa928be06c3ff701c3869f58fa8a2324-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-910621710e\" (UID: \"aa928be06c3ff701c3869f58fa8a2324\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e"
May 27 17:04:15.824655 kubelet[2987]: I0527 17:04:15.824694 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e"
May 27 17:04:15.824950 kubelet[2987]: I0527 17:04:15.824705 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e"
May 27 17:04:15.824950 kubelet[2987]: I0527 17:04:15.824727 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e"
May 27 17:04:15.824950 kubelet[2987]: I0527 17:04:15.824737 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa928be06c3ff701c3869f58fa8a2324-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-910621710e\" (UID: \"aa928be06c3ff701c3869f58fa8a2324\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e"
May 27 17:04:15.824950 kubelet[2987]: I0527 17:04:15.824748 2987 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa928be06c3ff701c3869f58fa8a2324-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-910621710e\" (UID: \"aa928be06c3ff701c3869f58fa8a2324\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e"
May 27 17:04:15.889929 kubelet[2987]: I0527 17:04:15.889611 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-910621710e"
May 27 17:04:15.890284 kubelet[2987]: E0527 17:04:15.890255 2987 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4344.0.0-a-910621710e"
May 27 17:04:16.057357 containerd[1871]: time="2025-05-27T17:04:16.057209848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-910621710e,Uid:aa928be06c3ff701c3869f58fa8a2324,Namespace:kube-system,Attempt:0,}"
May 27 17:04:16.065098 containerd[1871]: time="2025-05-27T17:04:16.065049959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-910621710e,Uid:d5a3678b1833d2d35b25e0988b228e54,Namespace:kube-system,Attempt:0,}"
May 27 17:04:16.118643 containerd[1871]: time="2025-05-27T17:04:16.118593595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-910621710e,Uid:72e139d55cbdd6259a220fa46b535ed1,Namespace:kube-system,Attempt:0,}"
May 27 17:04:16.127773 kubelet[2987]: E0527 17:04:16.127728 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-910621710e?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms"
May 27 17:04:16.277677 containerd[1871]: time="2025-05-27T17:04:16.277629280Z" level=info msg="connecting to shim 8b645de021b70d4ccb2ff45d0306f1e6e23d209f475a2dce8a4e63085d777257" address="unix:///run/containerd/s/4d873b1883058a14b09cce3addd91353998a959d13674a694e46897d1c75c912" namespace=k8s.io protocol=ttrpc version=3
May 27 17:04:16.292673 kubelet[2987]: I0527 17:04:16.292628 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-910621710e"
May 27 17:04:16.293053 kubelet[2987]: E0527 17:04:16.292980 2987 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4344.0.0-a-910621710e"
May 27 17:04:16.297186 systemd[1]: Started cri-containerd-8b645de021b70d4ccb2ff45d0306f1e6e23d209f475a2dce8a4e63085d777257.scope - libcontainer container 8b645de021b70d4ccb2ff45d0306f1e6e23d209f475a2dce8a4e63085d777257.
May 27 17:04:16.308210 containerd[1871]: time="2025-05-27T17:04:16.307114927Z" level=info msg="connecting to shim c96f153e5184e61a2fe0ef183853537f00a69e1bd7bbe40d97b8ab39de7b811e" address="unix:///run/containerd/s/768253f2979e123600014456dcfb8dfa81a0dc230a3800667354f9a408d9c3c2" namespace=k8s.io protocol=ttrpc version=3
May 27 17:04:16.322776 containerd[1871]: time="2025-05-27T17:04:16.322734565Z" level=info msg="connecting to shim 95552a9b98f2233542706ed5d4f4b1d6d8e467bdbb9dd9ea71d28ad1da2bdafb" address="unix:///run/containerd/s/95758d369acc4d479f3ea56bfa9194da28369810f3986e91d5e2ef5cea790694" namespace=k8s.io protocol=ttrpc version=3
May 27 17:04:16.338222 systemd[1]: Started cri-containerd-c96f153e5184e61a2fe0ef183853537f00a69e1bd7bbe40d97b8ab39de7b811e.scope - libcontainer container c96f153e5184e61a2fe0ef183853537f00a69e1bd7bbe40d97b8ab39de7b811e.
May 27 17:04:16.347884 systemd[1]: Started cri-containerd-95552a9b98f2233542706ed5d4f4b1d6d8e467bdbb9dd9ea71d28ad1da2bdafb.scope - libcontainer container 95552a9b98f2233542706ed5d4f4b1d6d8e467bdbb9dd9ea71d28ad1da2bdafb.
May 27 17:04:16.386868 containerd[1871]: time="2025-05-27T17:04:16.386696769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-910621710e,Uid:aa928be06c3ff701c3869f58fa8a2324,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b645de021b70d4ccb2ff45d0306f1e6e23d209f475a2dce8a4e63085d777257\""
May 27 17:04:16.394455 containerd[1871]: time="2025-05-27T17:04:16.394371601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-910621710e,Uid:d5a3678b1833d2d35b25e0988b228e54,Namespace:kube-system,Attempt:0,} returns sandbox id \"c96f153e5184e61a2fe0ef183853537f00a69e1bd7bbe40d97b8ab39de7b811e\""
May 27 17:04:16.399046 containerd[1871]: time="2025-05-27T17:04:16.398226064Z" level=info msg="CreateContainer within sandbox \"8b645de021b70d4ccb2ff45d0306f1e6e23d209f475a2dce8a4e63085d777257\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 17:04:16.403010 containerd[1871]: time="2025-05-27T17:04:16.402960696Z" level=info msg="CreateContainer within sandbox \"c96f153e5184e61a2fe0ef183853537f00a69e1bd7bbe40d97b8ab39de7b811e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 17:04:16.424711 containerd[1871]: time="2025-05-27T17:04:16.424671766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-910621710e,Uid:72e139d55cbdd6259a220fa46b535ed1,Namespace:kube-system,Attempt:0,} returns sandbox id \"95552a9b98f2233542706ed5d4f4b1d6d8e467bdbb9dd9ea71d28ad1da2bdafb\""
May 27 17:04:16.432476 containerd[1871]: time="2025-05-27T17:04:16.432387982Z" level=info msg="CreateContainer within sandbox \"95552a9b98f2233542706ed5d4f4b1d6d8e467bdbb9dd9ea71d28ad1da2bdafb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 17:04:16.438317 containerd[1871]: time="2025-05-27T17:04:16.438275486Z" level=info msg="Container faa0c32dfa27aff58e09c581e7f2275810533e93a4551860c6bc9746bec19fa1: CDI devices from CRI Config.CDIDevices: []"
May 27 17:04:16.490192 containerd[1871]: time="2025-05-27T17:04:16.489627114Z" level=info msg="Container 70c7027a484b063bccc4e9b5e6bcd13ec238158a82ccef306a6a1a552a56acde: CDI devices from CRI Config.CDIDevices: []"
May 27 17:04:16.607448 kubelet[2987]: E0527 17:04:16.607406 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 17:04:16.754314 kubelet[2987]: E0527 17:04:16.754268 2987 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 17:04:16.826769 containerd[1871]: time="2025-05-27T17:04:16.826607787Z" level=info msg="CreateContainer within sandbox \"c96f153e5184e61a2fe0ef183853537f00a69e1bd7bbe40d97b8ab39de7b811e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70c7027a484b063bccc4e9b5e6bcd13ec238158a82ccef306a6a1a552a56acde\""
May 27 17:04:16.827815 containerd[1871]: time="2025-05-27T17:04:16.827772499Z" level=info msg="StartContainer for \"70c7027a484b063bccc4e9b5e6bcd13ec238158a82ccef306a6a1a552a56acde\""
May 27 17:04:16.829039 containerd[1871]: time="2025-05-27T17:04:16.828984267Z" level=info msg="connecting to shim 70c7027a484b063bccc4e9b5e6bcd13ec238158a82ccef306a6a1a552a56acde" address="unix:///run/containerd/s/768253f2979e123600014456dcfb8dfa81a0dc230a3800667354f9a408d9c3c2" protocol=ttrpc version=3
May 27 17:04:16.847195 systemd[1]: Started cri-containerd-70c7027a484b063bccc4e9b5e6bcd13ec238158a82ccef306a6a1a552a56acde.scope - libcontainer container 70c7027a484b063bccc4e9b5e6bcd13ec238158a82ccef306a6a1a552a56acde.
May 27 17:04:16.877642 containerd[1871]: time="2025-05-27T17:04:16.877506319Z" level=info msg="CreateContainer within sandbox \"8b645de021b70d4ccb2ff45d0306f1e6e23d209f475a2dce8a4e63085d777257\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"faa0c32dfa27aff58e09c581e7f2275810533e93a4551860c6bc9746bec19fa1\""
May 27 17:04:16.878713 containerd[1871]: time="2025-05-27T17:04:16.878659447Z" level=info msg="StartContainer for \"faa0c32dfa27aff58e09c581e7f2275810533e93a4551860c6bc9746bec19fa1\""
May 27 17:04:16.880775 containerd[1871]: time="2025-05-27T17:04:16.880287015Z" level=info msg="Container 7017f348157d9b0caa0f7d64d9f36d3a874c85100158e3937c4a1661be02794d: CDI devices from CRI Config.CDIDevices: []"
May 27 17:04:16.882241 containerd[1871]: time="2025-05-27T17:04:16.882194711Z" level=info msg="connecting to shim faa0c32dfa27aff58e09c581e7f2275810533e93a4551860c6bc9746bec19fa1" address="unix:///run/containerd/s/4d873b1883058a14b09cce3addd91353998a959d13674a694e46897d1c75c912" protocol=ttrpc version=3
May 27 17:04:16.928978 kubelet[2987]: E0527 17:04:16.928942 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-910621710e?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="1.6s"
May 27 17:04:16.930454 containerd[1871]: time="2025-05-27T17:04:16.930423380Z" level=info msg="StartContainer for \"70c7027a484b063bccc4e9b5e6bcd13ec238158a82ccef306a6a1a552a56acde\" returns successfully"
May 27 17:04:16.941195 systemd[1]: Started cri-containerd-faa0c32dfa27aff58e09c581e7f2275810533e93a4551860c6bc9746bec19fa1.scope - libcontainer container faa0c32dfa27aff58e09c581e7f2275810533e93a4551860c6bc9746bec19fa1.
May 27 17:04:17.019068 containerd[1871]: time="2025-05-27T17:04:17.019032934Z" level=info msg="StartContainer for \"faa0c32dfa27aff58e09c581e7f2275810533e93a4551860c6bc9746bec19fa1\" returns successfully"
May 27 17:04:17.082159 containerd[1871]: time="2025-05-27T17:04:17.082106977Z" level=info msg="CreateContainer within sandbox \"95552a9b98f2233542706ed5d4f4b1d6d8e467bdbb9dd9ea71d28ad1da2bdafb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7017f348157d9b0caa0f7d64d9f36d3a874c85100158e3937c4a1661be02794d\""
May 27 17:04:17.083199 containerd[1871]: time="2025-05-27T17:04:17.083167689Z" level=info msg="StartContainer for \"7017f348157d9b0caa0f7d64d9f36d3a874c85100158e3937c4a1661be02794d\""
May 27 17:04:17.084121 containerd[1871]: time="2025-05-27T17:04:17.084095201Z" level=info msg="connecting to shim 7017f348157d9b0caa0f7d64d9f36d3a874c85100158e3937c4a1661be02794d" address="unix:///run/containerd/s/95758d369acc4d479f3ea56bfa9194da28369810f3986e91d5e2ef5cea790694" protocol=ttrpc version=3
May 27 17:04:17.096591 kubelet[2987]: I0527 17:04:17.096563 2987 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-910621710e"
May 27 17:04:17.116164 systemd[1]: Started cri-containerd-7017f348157d9b0caa0f7d64d9f36d3a874c85100158e3937c4a1661be02794d.scope - libcontainer container 7017f348157d9b0caa0f7d64d9f36d3a874c85100158e3937c4a1661be02794d.
May 27 17:04:17.207250 containerd[1871]: time="2025-05-27T17:04:17.207115201Z" level=info msg="StartContainer for \"7017f348157d9b0caa0f7d64d9f36d3a874c85100158e3937c4a1661be02794d\" returns successfully"
May 27 17:04:17.646964 kubelet[2987]: E0527 17:04:17.646900 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:17.650807 kubelet[2987]: E0527 17:04:17.650735 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:17.653569 kubelet[2987]: E0527 17:04:17.653407 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:18.647360 kubelet[2987]: E0527 17:04:18.647312 2987 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:18.656465 kubelet[2987]: E0527 17:04:18.656431 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:18.656804 kubelet[2987]: E0527 17:04:18.656781 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:18.657935 kubelet[2987]: E0527 17:04:18.657896 2987 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-910621710e\" not found" node="ci-4344.0.0-a-910621710e"
May 27 17:04:18.752244 kubelet[2987]: I0527 17:04:18.751933 2987 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-910621710e"
May 27 17:04:18.753149 kubelet[2987]: E0527 17:04:18.753110 2987 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.0.0-a-910621710e\": node \"ci-4344.0.0-a-910621710e\" not found"
May 27 17:04:18.825855 kubelet[2987]: I0527 17:04:18.825814 2987 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e"
May 27 17:04:18.905795 kubelet[2987]: E0527 17:04:18.905245 2987 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-910621710e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e"
May 27 17:04:18.905795 kubelet[2987]: I0527 17:04:18.905284 2987 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e"
May 27 17:04:18.908295 kubelet[2987]: E0527 17:04:18.908255 2987 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-910621710e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e"
May 27 17:04:18.908752 kubelet[2987]: I0527 17:04:18.908588 2987 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e"
May 27 17:04:18.910699 kubelet[2987]: E0527 17:04:18.910667 2987 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-a-910621710e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e"
May 27 17:04:19.510452 kubelet[2987]: I0527 17:04:19.510407 2987 apiserver.go:52] "Watching apiserver"
May 27 17:04:19.523556 kubelet[2987]: I0527 17:04:19.523494 2987 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 17:04:19.655535 kubelet[2987]: I0527 17:04:19.655504 2987 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e"
May 27 17:04:19.657458 kubelet[2987]: I0527 17:04:19.657255 2987 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e"
May 27 17:04:19.699393 kubelet[2987]: I0527 17:04:19.699302 2987 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
May 27 17:04:19.702343 kubelet[2987]: I0527 17:04:19.702304 2987 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
May 27 17:04:20.180330 kubelet[2987]: I0527 17:04:20.180296 2987 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e"
May 27 17:04:20.189215 kubelet[2987]: I0527 17:04:20.189174 2987 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
May 27 17:04:21.487514 systemd[1]: Reload requested from client PID 3271 ('systemctl') (unit session-9.scope)...
May 27 17:04:21.487532 systemd[1]: Reloading...
May 27 17:04:21.596086 zram_generator::config[3320]: No configuration found.
May 27 17:04:21.679500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:04:21.776900 systemd[1]: Reloading finished in 289 ms.
May 27 17:04:21.804162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:21.828236 systemd[1]: kubelet.service: Deactivated successfully.
May 27 17:04:21.828654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:21.828725 systemd[1]: kubelet.service: Consumed 527ms CPU time, 127.9M memory peak.
May 27 17:04:21.830908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:04:21.966279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:04:21.976432 (kubelet)[3382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:04:22.011774 kubelet[3382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:04:22.011774 kubelet[3382]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 17:04:22.011774 kubelet[3382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:04:22.011774 kubelet[3382]: I0527 17:04:22.011767 3382 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:04:22.016310 kubelet[3382]: I0527 17:04:22.016272 3382 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 17:04:22.016310 kubelet[3382]: I0527 17:04:22.016303 3382 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:04:22.016525 kubelet[3382]: I0527 17:04:22.016507 3382 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 17:04:22.017498 kubelet[3382]: I0527 17:04:22.017477 3382 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
May 27 17:04:22.074427 kubelet[3382]: I0527 17:04:22.074295 3382 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:04:22.084529 kubelet[3382]: I0527 17:04:22.083100 3382 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:04:22.086703 kubelet[3382]: I0527 17:04:22.086668 3382 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:04:22.087098 kubelet[3382]: I0527 17:04:22.087064 3382 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:04:22.087316 kubelet[3382]: I0527 17:04:22.087179 3382 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-910621710e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:04:22.087456 kubelet[3382]: I0527 17:04:22.087442 3382 topology_manager.go:138] "Creating topology manager with none policy" May 27 
17:04:22.087502 kubelet[3382]: I0527 17:04:22.087494 3382 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:04:22.087585 kubelet[3382]: I0527 17:04:22.087576 3382 state_mem.go:36] "Initialized new in-memory state store" May 27 17:04:22.087796 kubelet[3382]: I0527 17:04:22.087780 3382 kubelet.go:480] "Attempting to sync node with API server" May 27 17:04:22.087859 kubelet[3382]: I0527 17:04:22.087851 3382 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:04:22.087916 kubelet[3382]: I0527 17:04:22.087908 3382 kubelet.go:386] "Adding apiserver pod source" May 27 17:04:22.087977 kubelet[3382]: I0527 17:04:22.087968 3382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:04:22.091968 kubelet[3382]: I0527 17:04:22.091942 3382 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:04:22.092572 kubelet[3382]: I0527 17:04:22.092548 3382 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:04:22.095373 kubelet[3382]: I0527 17:04:22.095193 3382 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:04:22.095787 kubelet[3382]: I0527 17:04:22.095696 3382 server.go:1289] "Started kubelet" May 27 17:04:22.099398 kubelet[3382]: I0527 17:04:22.097675 3382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:04:22.110513 kubelet[3382]: I0527 17:04:22.110453 3382 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:04:22.111215 kubelet[3382]: I0527 17:04:22.111191 3382 server.go:317] "Adding debug handlers to kubelet server" May 27 17:04:22.116588 kubelet[3382]: I0527 17:04:22.116488 3382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:04:22.116835 kubelet[3382]: I0527 17:04:22.116743 3382 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:04:22.117554 kubelet[3382]: I0527 17:04:22.116970 3382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:04:22.117889 kubelet[3382]: I0527 17:04:22.117784 3382 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:04:22.118706 kubelet[3382]: E0527 17:04:22.118348 3382 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-910621710e\" not found" May 27 17:04:22.119554 kubelet[3382]: I0527 17:04:22.119528 3382 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:04:22.119889 kubelet[3382]: I0527 17:04:22.119649 3382 reconciler.go:26] "Reconciler: start to sync state" May 27 17:04:22.127806 kubelet[3382]: I0527 17:04:22.127671 3382 factory.go:223] Registration of the systemd container factory successfully May 27 17:04:22.127806 kubelet[3382]: I0527 17:04:22.127802 3382 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:04:22.132353 kubelet[3382]: I0527 17:04:22.132322 3382 factory.go:223] Registration of the containerd container factory successfully May 27 17:04:22.133468 kubelet[3382]: E0527 17:04:22.133431 3382 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:04:22.152916 kubelet[3382]: I0527 17:04:22.152829 3382 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 17:04:22.158055 kubelet[3382]: I0527 17:04:22.157958 3382 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 17:04:22.158521 kubelet[3382]: I0527 17:04:22.158210 3382 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:04:22.158521 kubelet[3382]: I0527 17:04:22.158241 3382 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:04:22.158521 kubelet[3382]: I0527 17:04:22.158249 3382 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:04:22.158521 kubelet[3382]: E0527 17:04:22.158302 3382 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:04:22.186918 kubelet[3382]: I0527 17:04:22.186882 3382 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:04:22.186918 kubelet[3382]: I0527 17:04:22.186901 3382 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:04:22.186918 kubelet[3382]: I0527 17:04:22.186931 3382 state_mem.go:36] "Initialized new in-memory state store" May 27 17:04:22.187202 kubelet[3382]: I0527 17:04:22.187183 3382 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:04:22.187224 kubelet[3382]: I0527 17:04:22.187199 3382 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:04:22.187224 kubelet[3382]: I0527 17:04:22.187222 3382 policy_none.go:49] "None policy: Start" May 27 17:04:22.187262 kubelet[3382]: I0527 17:04:22.187231 3382 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:04:22.187262 kubelet[3382]: I0527 17:04:22.187239 3382 state_mem.go:35] "Initializing new in-memory state store" May 27 17:04:22.187338 kubelet[3382]: I0527 17:04:22.187327 3382 state_mem.go:75] "Updated machine memory state" May 27 17:04:22.191708 kubelet[3382]: E0527 17:04:22.191517 3382 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:04:22.191708 kubelet[3382]: I0527 
17:04:22.191709 3382 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:04:22.191856 kubelet[3382]: I0527 17:04:22.191720 3382 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:04:22.195102 kubelet[3382]: I0527 17:04:22.195070 3382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:04:22.195949 kubelet[3382]: E0527 17:04:22.195849 3382 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:04:22.259531 kubelet[3382]: I0527 17:04:22.259422 3382 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e" May 27 17:04:22.424898 kubelet[3382]: I0527 17:04:22.259712 3382 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e" May 27 17:04:22.424898 kubelet[3382]: I0527 17:04:22.259422 3382 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e" May 27 17:04:22.424898 kubelet[3382]: I0527 17:04:22.271785 3382 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:04:22.424898 kubelet[3382]: I0527 17:04:22.272533 3382 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:04:22.424898 kubelet[3382]: E0527 17:04:22.272579 3382 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-a-910621710e\" already exists" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e" May 27 17:04:22.424898 kubelet[3382]: E0527 17:04:22.272581 3382 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-scheduler-ci-4344.0.0-a-910621710e\" already exists" pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e" May 27 17:04:22.424898 kubelet[3382]: I0527 17:04:22.272794 3382 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:04:22.424898 kubelet[3382]: E0527 17:04:22.272827 3382 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-910621710e\" already exists" pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e" May 27 17:04:22.424898 kubelet[3382]: I0527 17:04:22.295163 3382 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-910621710e" May 27 17:04:22.424898 kubelet[3382]: I0527 17:04:22.306951 3382 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.0.0-a-910621710e" May 27 17:04:22.425138 kubelet[3382]: I0527 17:04:22.320560 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa928be06c3ff701c3869f58fa8a2324-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-910621710e\" (UID: \"aa928be06c3ff701c3869f58fa8a2324\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e" May 27 17:04:22.425138 kubelet[3382]: I0527 17:04:22.320598 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e" May 27 17:04:22.425138 kubelet[3382]: I0527 17:04:22.320609 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e" May 27 17:04:22.425138 kubelet[3382]: I0527 17:04:22.320626 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e" May 27 17:04:22.425138 kubelet[3382]: I0527 17:04:22.320637 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e" May 27 17:04:22.425214 kubelet[3382]: I0527 17:04:22.320648 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72e139d55cbdd6259a220fa46b535ed1-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-910621710e\" (UID: \"72e139d55cbdd6259a220fa46b535ed1\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e" May 27 17:04:22.425214 kubelet[3382]: I0527 17:04:22.320659 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa928be06c3ff701c3869f58fa8a2324-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-910621710e\" (UID: \"aa928be06c3ff701c3869f58fa8a2324\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e" May 27 17:04:22.425214 kubelet[3382]: 
I0527 17:04:22.320668 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa928be06c3ff701c3869f58fa8a2324-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-910621710e\" (UID: \"aa928be06c3ff701c3869f58fa8a2324\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e" May 27 17:04:22.425214 kubelet[3382]: I0527 17:04:22.320678 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5a3678b1833d2d35b25e0988b228e54-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-910621710e\" (UID: \"d5a3678b1833d2d35b25e0988b228e54\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e" May 27 17:04:22.427493 kubelet[3382]: I0527 17:04:22.425858 3382 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-910621710e" May 27 17:04:23.089587 kubelet[3382]: I0527 17:04:23.089287 3382 apiserver.go:52] "Watching apiserver" May 27 17:04:23.120635 kubelet[3382]: I0527 17:04:23.120558 3382 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:04:23.175230 kubelet[3382]: I0527 17:04:23.175194 3382 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e" May 27 17:04:23.176092 kubelet[3382]: I0527 17:04:23.176028 3382 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e" May 27 17:04:23.184689 kubelet[3382]: I0527 17:04:23.184645 3382 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:04:23.185140 kubelet[3382]: E0527 17:04:23.184712 3382 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4344.0.0-a-910621710e\" already exists" pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e" May 27 17:04:23.187102 kubelet[3382]: I0527 17:04:23.186741 3382 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 17:04:23.187102 kubelet[3382]: E0527 17:04:23.186805 3382 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-910621710e\" already exists" pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e" May 27 17:04:23.200402 kubelet[3382]: I0527 17:04:23.200341 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.0.0-a-910621710e" podStartSLOduration=4.200324818 podStartE2EDuration="4.200324818s" podCreationTimestamp="2025-05-27 17:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:04:23.200025946 +0000 UTC m=+1.219540183" watchObservedRunningTime="2025-05-27 17:04:23.200324818 +0000 UTC m=+1.219839063" May 27 17:04:23.222725 kubelet[3382]: I0527 17:04:23.222620 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-910621710e" podStartSLOduration=3.222602571 podStartE2EDuration="3.222602571s" podCreationTimestamp="2025-05-27 17:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:04:23.212299746 +0000 UTC m=+1.231814023" watchObservedRunningTime="2025-05-27 17:04:23.222602571 +0000 UTC m=+1.242116880" May 27 17:04:25.638385 kubelet[3382]: I0527 17:04:25.638345 3382 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:04:25.638805 containerd[1871]: time="2025-05-27T17:04:25.638719168Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:04:25.639064 kubelet[3382]: I0527 17:04:25.638916 3382 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:04:26.367363 kubelet[3382]: I0527 17:04:26.367300 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.0.0-a-910621710e" podStartSLOduration=7.367281191 podStartE2EDuration="7.367281191s" podCreationTimestamp="2025-05-27 17:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:04:23.223724011 +0000 UTC m=+1.243238248" watchObservedRunningTime="2025-05-27 17:04:26.367281191 +0000 UTC m=+4.386795428" May 27 17:04:26.384244 systemd[1]: Created slice kubepods-besteffort-podb19b2aec_3bf7_47dd_b87d_34e0a17f08ab.slice - libcontainer container kubepods-besteffort-podb19b2aec_3bf7_47dd_b87d_34e0a17f08ab.slice. 
May 27 17:04:26.445776 kubelet[3382]: I0527 17:04:26.445667 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hhzr\" (UniqueName: \"kubernetes.io/projected/b19b2aec-3bf7-47dd-b87d-34e0a17f08ab-kube-api-access-2hhzr\") pod \"kube-proxy-nxhpb\" (UID: \"b19b2aec-3bf7-47dd-b87d-34e0a17f08ab\") " pod="kube-system/kube-proxy-nxhpb" May 27 17:04:26.445776 kubelet[3382]: I0527 17:04:26.445737 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b19b2aec-3bf7-47dd-b87d-34e0a17f08ab-xtables-lock\") pod \"kube-proxy-nxhpb\" (UID: \"b19b2aec-3bf7-47dd-b87d-34e0a17f08ab\") " pod="kube-system/kube-proxy-nxhpb" May 27 17:04:26.445776 kubelet[3382]: I0527 17:04:26.445753 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b19b2aec-3bf7-47dd-b87d-34e0a17f08ab-lib-modules\") pod \"kube-proxy-nxhpb\" (UID: \"b19b2aec-3bf7-47dd-b87d-34e0a17f08ab\") " pod="kube-system/kube-proxy-nxhpb" May 27 17:04:26.446110 kubelet[3382]: I0527 17:04:26.446038 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b19b2aec-3bf7-47dd-b87d-34e0a17f08ab-kube-proxy\") pod \"kube-proxy-nxhpb\" (UID: \"b19b2aec-3bf7-47dd-b87d-34e0a17f08ab\") " pod="kube-system/kube-proxy-nxhpb" May 27 17:04:26.656592 systemd[1]: Created slice kubepods-besteffort-pod6a8392a8_46f2_4a43_bbf5_1de32d07b585.slice - libcontainer container kubepods-besteffort-pod6a8392a8_46f2_4a43_bbf5_1de32d07b585.slice. 
May 27 17:04:26.697379 containerd[1871]: time="2025-05-27T17:04:26.697338801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxhpb,Uid:b19b2aec-3bf7-47dd-b87d-34e0a17f08ab,Namespace:kube-system,Attempt:0,}" May 27 17:04:26.747751 kubelet[3382]: I0527 17:04:26.747706 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6a8392a8-46f2-4a43-bbf5-1de32d07b585-var-lib-calico\") pod \"tigera-operator-844669ff44-r7rs9\" (UID: \"6a8392a8-46f2-4a43-bbf5-1de32d07b585\") " pod="tigera-operator/tigera-operator-844669ff44-r7rs9" May 27 17:04:26.747751 kubelet[3382]: I0527 17:04:26.747754 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvm5\" (UniqueName: \"kubernetes.io/projected/6a8392a8-46f2-4a43-bbf5-1de32d07b585-kube-api-access-ndvm5\") pod \"tigera-operator-844669ff44-r7rs9\" (UID: \"6a8392a8-46f2-4a43-bbf5-1de32d07b585\") " pod="tigera-operator/tigera-operator-844669ff44-r7rs9" May 27 17:04:26.757830 containerd[1871]: time="2025-05-27T17:04:26.757727899Z" level=info msg="connecting to shim d50cc5304afed03c484fad22fc45e6782bf272005ed762c02bfea048b19911b3" address="unix:///run/containerd/s/135190442ec767e60a4aa059c5c89a8643d242d856c8bc2c12314f6048e0021f" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:26.779196 systemd[1]: Started cri-containerd-d50cc5304afed03c484fad22fc45e6782bf272005ed762c02bfea048b19911b3.scope - libcontainer container d50cc5304afed03c484fad22fc45e6782bf272005ed762c02bfea048b19911b3. 
May 27 17:04:26.804557 containerd[1871]: time="2025-05-27T17:04:26.804502477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxhpb,Uid:b19b2aec-3bf7-47dd-b87d-34e0a17f08ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"d50cc5304afed03c484fad22fc45e6782bf272005ed762c02bfea048b19911b3\"" May 27 17:04:26.812241 containerd[1871]: time="2025-05-27T17:04:26.812200685Z" level=info msg="CreateContainer within sandbox \"d50cc5304afed03c484fad22fc45e6782bf272005ed762c02bfea048b19911b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:04:26.863044 containerd[1871]: time="2025-05-27T17:04:26.861151166Z" level=info msg="Container 5316aa7f7dfb9fae3257decf6b0125b6157f906d92f42945e07cef083a863ade: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:26.867404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595587601.mount: Deactivated successfully. May 27 17:04:26.882405 containerd[1871]: time="2025-05-27T17:04:26.882338175Z" level=info msg="CreateContainer within sandbox \"d50cc5304afed03c484fad22fc45e6782bf272005ed762c02bfea048b19911b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5316aa7f7dfb9fae3257decf6b0125b6157f906d92f42945e07cef083a863ade\"" May 27 17:04:26.884050 containerd[1871]: time="2025-05-27T17:04:26.883915159Z" level=info msg="StartContainer for \"5316aa7f7dfb9fae3257decf6b0125b6157f906d92f42945e07cef083a863ade\"" May 27 17:04:26.887094 containerd[1871]: time="2025-05-27T17:04:26.887031359Z" level=info msg="connecting to shim 5316aa7f7dfb9fae3257decf6b0125b6157f906d92f42945e07cef083a863ade" address="unix:///run/containerd/s/135190442ec767e60a4aa059c5c89a8643d242d856c8bc2c12314f6048e0021f" protocol=ttrpc version=3 May 27 17:04:26.903183 systemd[1]: Started cri-containerd-5316aa7f7dfb9fae3257decf6b0125b6157f906d92f42945e07cef083a863ade.scope - libcontainer container 5316aa7f7dfb9fae3257decf6b0125b6157f906d92f42945e07cef083a863ade. 
May 27 17:04:26.942326 containerd[1871]: time="2025-05-27T17:04:26.941437985Z" level=info msg="StartContainer for \"5316aa7f7dfb9fae3257decf6b0125b6157f906d92f42945e07cef083a863ade\" returns successfully" May 27 17:04:26.963423 containerd[1871]: time="2025-05-27T17:04:26.963126434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-r7rs9,Uid:6a8392a8-46f2-4a43-bbf5-1de32d07b585,Namespace:tigera-operator,Attempt:0,}" May 27 17:04:27.053271 containerd[1871]: time="2025-05-27T17:04:27.053203300Z" level=info msg="connecting to shim d4ecdffe8055b1d567b4ac2d5206d9112c65ba9023f8598b6e22d84fae208489" address="unix:///run/containerd/s/9ddd70cfa400697c020495727be896763fd43d2bf48fac99e1839f7ae3acd2b6" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:27.082210 systemd[1]: Started cri-containerd-d4ecdffe8055b1d567b4ac2d5206d9112c65ba9023f8598b6e22d84fae208489.scope - libcontainer container d4ecdffe8055b1d567b4ac2d5206d9112c65ba9023f8598b6e22d84fae208489. May 27 17:04:27.124515 containerd[1871]: time="2025-05-27T17:04:27.124464367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-r7rs9,Uid:6a8392a8-46f2-4a43-bbf5-1de32d07b585,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d4ecdffe8055b1d567b4ac2d5206d9112c65ba9023f8598b6e22d84fae208489\"" May 27 17:04:27.128692 containerd[1871]: time="2025-05-27T17:04:27.127856719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 17:04:27.221721 kubelet[3382]: I0527 17:04:27.221556 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nxhpb" podStartSLOduration=1.221536514 podStartE2EDuration="1.221536514s" podCreationTimestamp="2025-05-27 17:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:04:27.203405665 +0000 UTC m=+5.222919926" watchObservedRunningTime="2025-05-27 
17:04:27.221536514 +0000 UTC m=+5.241050751" May 27 17:04:28.726407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656858951.mount: Deactivated successfully. May 27 17:04:29.760064 containerd[1871]: time="2025-05-27T17:04:29.759972692Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:29.762658 containerd[1871]: time="2025-05-27T17:04:29.762609788Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 27 17:04:29.767612 containerd[1871]: time="2025-05-27T17:04:29.767569532Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:29.775154 containerd[1871]: time="2025-05-27T17:04:29.775092877Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:29.775553 containerd[1871]: time="2025-05-27T17:04:29.775326917Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.646226486s" May 27 17:04:29.775553 containerd[1871]: time="2025-05-27T17:04:29.775356493Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 27 17:04:29.783901 containerd[1871]: time="2025-05-27T17:04:29.783847045Z" level=info msg="CreateContainer within sandbox \"d4ecdffe8055b1d567b4ac2d5206d9112c65ba9023f8598b6e22d84fae208489\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 17:04:29.823718 containerd[1871]: time="2025-05-27T17:04:29.823658438Z" level=info msg="Container e8f87fd6407c77f3e19f9030a2854f096af846b90114c802f900e302eda39c0f: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:29.824592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334172461.mount: Deactivated successfully. May 27 17:04:29.844529 containerd[1871]: time="2025-05-27T17:04:29.844430095Z" level=info msg="CreateContainer within sandbox \"d4ecdffe8055b1d567b4ac2d5206d9112c65ba9023f8598b6e22d84fae208489\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e8f87fd6407c77f3e19f9030a2854f096af846b90114c802f900e302eda39c0f\"" May 27 17:04:29.845376 containerd[1871]: time="2025-05-27T17:04:29.845110551Z" level=info msg="StartContainer for \"e8f87fd6407c77f3e19f9030a2854f096af846b90114c802f900e302eda39c0f\"" May 27 17:04:29.845898 containerd[1871]: time="2025-05-27T17:04:29.845872255Z" level=info msg="connecting to shim e8f87fd6407c77f3e19f9030a2854f096af846b90114c802f900e302eda39c0f" address="unix:///run/containerd/s/9ddd70cfa400697c020495727be896763fd43d2bf48fac99e1839f7ae3acd2b6" protocol=ttrpc version=3 May 27 17:04:29.868225 systemd[1]: Started cri-containerd-e8f87fd6407c77f3e19f9030a2854f096af846b90114c802f900e302eda39c0f.scope - libcontainer container e8f87fd6407c77f3e19f9030a2854f096af846b90114c802f900e302eda39c0f. 
May 27 17:04:29.898410 containerd[1871]: time="2025-05-27T17:04:29.898344585Z" level=info msg="StartContainer for \"e8f87fd6407c77f3e19f9030a2854f096af846b90114c802f900e302eda39c0f\" returns successfully"
May 27 17:04:30.207316 kubelet[3382]: I0527 17:04:30.207243 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-r7rs9" podStartSLOduration=1.5578406679999999 podStartE2EDuration="4.206961842s" podCreationTimestamp="2025-05-27 17:04:26 +0000 UTC" firstStartedPulling="2025-05-27 17:04:27.126953559 +0000 UTC m=+5.146467796" lastFinishedPulling="2025-05-27 17:04:29.776074733 +0000 UTC m=+7.795588970" observedRunningTime="2025-05-27 17:04:30.206794938 +0000 UTC m=+8.226309183" watchObservedRunningTime="2025-05-27 17:04:30.206961842 +0000 UTC m=+8.226476207"
May 27 17:04:35.120849 sudo[2461]: pam_unix(sudo:session): session closed for user root
May 27 17:04:35.206112 sshd[2460]: Connection closed by 10.200.16.10 port 54026
May 27 17:04:35.206925 sshd-session[2458]: pam_unix(sshd:session): session closed for user core
May 27 17:04:35.211919 systemd-logind[1849]: Session 9 logged out. Waiting for processes to exit.
May 27 17:04:35.212331 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:54026.service: Deactivated successfully.
May 27 17:04:35.215586 systemd[1]: session-9.scope: Deactivated successfully.
May 27 17:04:35.215912 systemd[1]: session-9.scope: Consumed 3.536s CPU time, 231.2M memory peak.
May 27 17:04:35.220068 systemd-logind[1849]: Removed session 9.
May 27 17:04:38.683740 systemd[1]: Created slice kubepods-besteffort-pod990473f5_cd40_4e99_8e98_b839afda97b5.slice - libcontainer container kubepods-besteffort-pod990473f5_cd40_4e99_8e98_b839afda97b5.slice.
May 27 17:04:38.731271 kubelet[3382]: I0527 17:04:38.731145 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/990473f5-cd40-4e99-8e98-b839afda97b5-tigera-ca-bundle\") pod \"calico-typha-7f4595556c-dmt7z\" (UID: \"990473f5-cd40-4e99-8e98-b839afda97b5\") " pod="calico-system/calico-typha-7f4595556c-dmt7z"
May 27 17:04:38.731271 kubelet[3382]: I0527 17:04:38.731193 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/990473f5-cd40-4e99-8e98-b839afda97b5-typha-certs\") pod \"calico-typha-7f4595556c-dmt7z\" (UID: \"990473f5-cd40-4e99-8e98-b839afda97b5\") " pod="calico-system/calico-typha-7f4595556c-dmt7z"
May 27 17:04:38.731271 kubelet[3382]: I0527 17:04:38.731207 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r8qj\" (UniqueName: \"kubernetes.io/projected/990473f5-cd40-4e99-8e98-b839afda97b5-kube-api-access-4r8qj\") pod \"calico-typha-7f4595556c-dmt7z\" (UID: \"990473f5-cd40-4e99-8e98-b839afda97b5\") " pod="calico-system/calico-typha-7f4595556c-dmt7z"
May 27 17:04:38.815685 systemd[1]: Created slice kubepods-besteffort-pod87aa832e_a4a6_47b8_a7ee_7abaf7f68657.slice - libcontainer container kubepods-besteffort-pod87aa832e_a4a6_47b8_a7ee_7abaf7f68657.slice.
May 27 17:04:38.831929 kubelet[3382]: I0527 17:04:38.831883 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-cni-net-dir\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.831929 kubelet[3382]: I0527 17:04:38.831922 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-flexvol-driver-host\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.831929 kubelet[3382]: I0527 17:04:38.831941 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-node-certs\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.832146 kubelet[3382]: I0527 17:04:38.831952 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-tigera-ca-bundle\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.832146 kubelet[3382]: I0527 17:04:38.831974 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-xtables-lock\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.832146 kubelet[3382]: I0527 17:04:38.831986 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-cni-log-dir\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.832146 kubelet[3382]: I0527 17:04:38.832002 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-lib-modules\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.832146 kubelet[3382]: I0527 17:04:38.832010 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-var-run-calico\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.832223 kubelet[3382]: I0527 17:04:38.832021 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-cni-bin-dir\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.832223 kubelet[3382]: I0527 17:04:38.832040 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-policysync\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.832223 kubelet[3382]: I0527 17:04:38.832058 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxvz4\" (UniqueName: \"kubernetes.io/projected/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-kube-api-access-kxvz4\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.832223 kubelet[3382]: I0527 17:04:38.832074 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/87aa832e-a4a6-47b8-a7ee-7abaf7f68657-var-lib-calico\") pod \"calico-node-56nqq\" (UID: \"87aa832e-a4a6-47b8-a7ee-7abaf7f68657\") " pod="calico-system/calico-node-56nqq"
May 27 17:04:38.931638 kubelet[3382]: E0527 17:04:38.931275 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tmb64" podUID="9cad5d59-f409-4303-a7b2-86efde3e9bab"
May 27 17:04:38.945161 kubelet[3382]: E0527 17:04:38.944933 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:38.946232 kubelet[3382]: W0527 17:04:38.945544 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:38.946232 kubelet[3382]: E0527 17:04:38.945592 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:38.954339 kubelet[3382]: E0527 17:04:38.954314 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:38.954590 kubelet[3382]: W0527 17:04:38.954572 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:38.954687 kubelet[3382]: E0527 17:04:38.954647 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:38.987916 containerd[1871]: time="2025-05-27T17:04:38.987873281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f4595556c-dmt7z,Uid:990473f5-cd40-4e99-8e98-b839afda97b5,Namespace:calico-system,Attempt:0,}"
May 27 17:04:39.020534 kubelet[3382]: E0527 17:04:39.020444 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.020534 kubelet[3382]: W0527 17:04:39.020468 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.020534 kubelet[3382]: E0527 17:04:39.020489 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.021189 kubelet[3382]: E0527 17:04:39.021071 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.021189 kubelet[3382]: W0527 17:04:39.021093 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.021189 kubelet[3382]: E0527 17:04:39.021139 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.021525 kubelet[3382]: E0527 17:04:39.021444 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.021525 kubelet[3382]: W0527 17:04:39.021456 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.021525 kubelet[3382]: E0527 17:04:39.021468 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.022059 kubelet[3382]: E0527 17:04:39.022042 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.022219 kubelet[3382]: W0527 17:04:39.022138 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.022219 kubelet[3382]: E0527 17:04:39.022155 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.022556 kubelet[3382]: E0527 17:04:39.022495 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.022556 kubelet[3382]: W0527 17:04:39.022507 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.022556 kubelet[3382]: E0527 17:04:39.022516 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.022904 kubelet[3382]: E0527 17:04:39.022848 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.022904 kubelet[3382]: W0527 17:04:39.022863 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.022904 kubelet[3382]: E0527 17:04:39.022872 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.023734 kubelet[3382]: E0527 17:04:39.023349 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.023734 kubelet[3382]: W0527 17:04:39.023362 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.023734 kubelet[3382]: E0527 17:04:39.023371 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.024126 kubelet[3382]: E0527 17:04:39.024113 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.024269 kubelet[3382]: W0527 17:04:39.024196 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.024269 kubelet[3382]: E0527 17:04:39.024212 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.024479 kubelet[3382]: E0527 17:04:39.024467 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.024636 kubelet[3382]: W0527 17:04:39.024541 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.024636 kubelet[3382]: E0527 17:04:39.024556 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.024743 kubelet[3382]: E0527 17:04:39.024734 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.024785 kubelet[3382]: W0527 17:04:39.024778 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.024827 kubelet[3382]: E0527 17:04:39.024817 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.025308 kubelet[3382]: E0527 17:04:39.025245 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.025308 kubelet[3382]: W0527 17:04:39.025259 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.025308 kubelet[3382]: E0527 17:04:39.025269 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.025771 kubelet[3382]: E0527 17:04:39.025605 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.025771 kubelet[3382]: W0527 17:04:39.025616 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.025771 kubelet[3382]: E0527 17:04:39.025626 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.026631 kubelet[3382]: E0527 17:04:39.026563 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.026631 kubelet[3382]: W0527 17:04:39.026578 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.026631 kubelet[3382]: E0527 17:04:39.026588 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.027441 kubelet[3382]: E0527 17:04:39.027375 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.027441 kubelet[3382]: W0527 17:04:39.027391 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.027441 kubelet[3382]: E0527 17:04:39.027401 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.027744 kubelet[3382]: E0527 17:04:39.027730 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.027878 kubelet[3382]: W0527 17:04:39.027811 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.027878 kubelet[3382]: E0527 17:04:39.027828 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.028695 kubelet[3382]: E0527 17:04:39.028574 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.028695 kubelet[3382]: W0527 17:04:39.028587 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.028695 kubelet[3382]: E0527 17:04:39.028596 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.028981 kubelet[3382]: E0527 17:04:39.028971 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.029216 kubelet[3382]: W0527 17:04:39.029083 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.029216 kubelet[3382]: E0527 17:04:39.029099 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.031032 kubelet[3382]: E0527 17:04:39.030288 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.031032 kubelet[3382]: W0527 17:04:39.030320 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.031032 kubelet[3382]: E0527 17:04:39.030332 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.031032 kubelet[3382]: E0527 17:04:39.030903 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.031032 kubelet[3382]: W0527 17:04:39.030913 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.031032 kubelet[3382]: E0527 17:04:39.030922 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.032073 kubelet[3382]: E0527 17:04:39.032056 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.032204 kubelet[3382]: W0527 17:04:39.032155 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.032204 kubelet[3382]: E0527 17:04:39.032170 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.034651 kubelet[3382]: E0527 17:04:39.034630 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.034872 kubelet[3382]: W0527 17:04:39.034770 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.034872 kubelet[3382]: E0527 17:04:39.034791 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.034872 kubelet[3382]: I0527 17:04:39.034820 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cad5d59-f409-4303-a7b2-86efde3e9bab-kubelet-dir\") pod \"csi-node-driver-tmb64\" (UID: \"9cad5d59-f409-4303-a7b2-86efde3e9bab\") " pod="calico-system/csi-node-driver-tmb64"
May 27 17:04:39.035171 kubelet[3382]: E0527 17:04:39.035157 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.035275 kubelet[3382]: W0527 17:04:39.035244 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.035275 kubelet[3382]: E0527 17:04:39.035259 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.035532 kubelet[3382]: I0527 17:04:39.035518 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9cad5d59-f409-4303-a7b2-86efde3e9bab-varrun\") pod \"csi-node-driver-tmb64\" (UID: \"9cad5d59-f409-4303-a7b2-86efde3e9bab\") " pod="calico-system/csi-node-driver-tmb64"
May 27 17:04:39.035672 kubelet[3382]: E0527 17:04:39.035647 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.035672 kubelet[3382]: W0527 17:04:39.035656 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.035672 kubelet[3382]: E0527 17:04:39.035664 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.036267 kubelet[3382]: E0527 17:04:39.036254 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.036420 kubelet[3382]: W0527 17:04:39.036340 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.036420 kubelet[3382]: E0527 17:04:39.036355 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.036850 kubelet[3382]: E0527 17:04:39.036780 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.037321 kubelet[3382]: W0527 17:04:39.037232 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.037321 kubelet[3382]: E0527 17:04:39.037249 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.037321 kubelet[3382]: I0527 17:04:39.037277 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cad5d59-f409-4303-a7b2-86efde3e9bab-registration-dir\") pod \"csi-node-driver-tmb64\" (UID: \"9cad5d59-f409-4303-a7b2-86efde3e9bab\") " pod="calico-system/csi-node-driver-tmb64"
May 27 17:04:39.038211 kubelet[3382]: E0527 17:04:39.037512 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.038211 kubelet[3382]: W0527 17:04:39.037522 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.038211 kubelet[3382]: E0527 17:04:39.037531 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.038211 kubelet[3382]: I0527 17:04:39.037613 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cad5d59-f409-4303-a7b2-86efde3e9bab-socket-dir\") pod \"csi-node-driver-tmb64\" (UID: \"9cad5d59-f409-4303-a7b2-86efde3e9bab\") " pod="calico-system/csi-node-driver-tmb64"
May 27 17:04:39.038211 kubelet[3382]: E0527 17:04:39.037693 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.038211 kubelet[3382]: W0527 17:04:39.037698 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.038211 kubelet[3382]: E0527 17:04:39.037705 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.038211 kubelet[3382]: E0527 17:04:39.037818 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.038211 kubelet[3382]: W0527 17:04:39.037823 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.038353 kubelet[3382]: E0527 17:04:39.037829 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.038642 kubelet[3382]: E0527 17:04:39.038414 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.038642 kubelet[3382]: W0527 17:04:39.038427 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.038642 kubelet[3382]: E0527 17:04:39.038437 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.038642 kubelet[3382]: I0527 17:04:39.038465 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45f9\" (UniqueName: \"kubernetes.io/projected/9cad5d59-f409-4303-a7b2-86efde3e9bab-kube-api-access-s45f9\") pod \"csi-node-driver-tmb64\" (UID: \"9cad5d59-f409-4303-a7b2-86efde3e9bab\") " pod="calico-system/csi-node-driver-tmb64"
May 27 17:04:39.038642 kubelet[3382]: E0527 17:04:39.038616 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.038642 kubelet[3382]: W0527 17:04:39.038625 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.038642 kubelet[3382]: E0527 17:04:39.038632 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.039042 kubelet[3382]: E0527 17:04:39.039009 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.039042 kubelet[3382]: W0527 17:04:39.039022 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.039042 kubelet[3382]: E0527 17:04:39.039031 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.039453 kubelet[3382]: E0527 17:04:39.039440 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.039541 kubelet[3382]: W0527 17:04:39.039516 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.039541 kubelet[3382]: E0527 17:04:39.039532 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.039750 kubelet[3382]: E0527 17:04:39.039738 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.039836 kubelet[3382]: W0527 17:04:39.039811 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.039836 kubelet[3382]: E0527 17:04:39.039825 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 17:04:39.040276 kubelet[3382]: E0527 17:04:39.040180 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 17:04:39.040276 kubelet[3382]: W0527 17:04:39.040192 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 17:04:39.040276 kubelet[3382]: E0527 17:04:39.040202 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.041077 kubelet[3382]: E0527 17:04:39.040707 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.041077 kubelet[3382]: W0527 17:04:39.041041 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.041077 kubelet[3382]: E0527 17:04:39.041060 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.063241 containerd[1871]: time="2025-05-27T17:04:39.063188403Z" level=info msg="connecting to shim 5e972b2b36841a55671e4bb6f5427687b564343b3583581bf07039f8afac07d9" address="unix:///run/containerd/s/ad0ab51fb5cc58d080110e2fa1b57676c483bf88980d37f4e177dcb375747571" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:39.092303 systemd[1]: Started cri-containerd-5e972b2b36841a55671e4bb6f5427687b564343b3583581bf07039f8afac07d9.scope - libcontainer container 5e972b2b36841a55671e4bb6f5427687b564343b3583581bf07039f8afac07d9. 
May 27 17:04:39.121715 containerd[1871]: time="2025-05-27T17:04:39.121548652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-56nqq,Uid:87aa832e-a4a6-47b8-a7ee-7abaf7f68657,Namespace:calico-system,Attempt:0,}" May 27 17:04:39.146412 kubelet[3382]: E0527 17:04:39.146152 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.146412 kubelet[3382]: W0527 17:04:39.146207 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.146412 kubelet[3382]: E0527 17:04:39.146231 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.147114 kubelet[3382]: E0527 17:04:39.147088 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.147467 kubelet[3382]: W0527 17:04:39.147429 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.147843 kubelet[3382]: E0527 17:04:39.147621 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.149500 kubelet[3382]: E0527 17:04:39.148923 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.150764 kubelet[3382]: W0527 17:04:39.150514 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.150764 kubelet[3382]: E0527 17:04:39.150541 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.150903 kubelet[3382]: E0527 17:04:39.150890 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.150953 kubelet[3382]: W0527 17:04:39.150944 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.151022 kubelet[3382]: E0527 17:04:39.151011 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.151278 kubelet[3382]: E0527 17:04:39.151239 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.151665 kubelet[3382]: W0527 17:04:39.151646 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.152015 kubelet[3382]: E0527 17:04:39.151762 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.152531 kubelet[3382]: E0527 17:04:39.152300 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.152723 kubelet[3382]: W0527 17:04:39.152604 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.152856 kubelet[3382]: E0527 17:04:39.152836 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.153214 kubelet[3382]: E0527 17:04:39.153178 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.153214 kubelet[3382]: W0527 17:04:39.153191 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.153214 kubelet[3382]: E0527 17:04:39.153202 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.154224 containerd[1871]: time="2025-05-27T17:04:39.153399518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f4595556c-dmt7z,Uid:990473f5-cd40-4e99-8e98-b839afda97b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e972b2b36841a55671e4bb6f5427687b564343b3583581bf07039f8afac07d9\"" May 27 17:04:39.155068 kubelet[3382]: E0527 17:04:39.154415 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.155068 kubelet[3382]: W0527 17:04:39.154425 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.155068 kubelet[3382]: E0527 17:04:39.154437 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.155296 kubelet[3382]: E0527 17:04:39.155280 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.155479 kubelet[3382]: W0527 17:04:39.155359 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.155479 kubelet[3382]: E0527 17:04:39.155376 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.155593 kubelet[3382]: E0527 17:04:39.155579 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.156056 containerd[1871]: time="2025-05-27T17:04:39.155978535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 17:04:39.156210 kubelet[3382]: W0527 17:04:39.156192 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.156362 kubelet[3382]: E0527 17:04:39.156250 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.156467 kubelet[3382]: E0527 17:04:39.156455 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.156514 kubelet[3382]: W0527 17:04:39.156503 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.156561 kubelet[3382]: E0527 17:04:39.156550 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.156859 kubelet[3382]: E0527 17:04:39.156744 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.156859 kubelet[3382]: W0527 17:04:39.156755 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.156859 kubelet[3382]: E0527 17:04:39.156763 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.158024 kubelet[3382]: E0527 17:04:39.158002 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.158188 kubelet[3382]: W0527 17:04:39.158172 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.158388 kubelet[3382]: E0527 17:04:39.158246 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.158550 kubelet[3382]: E0527 17:04:39.158538 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.158609 kubelet[3382]: W0527 17:04:39.158599 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.158654 kubelet[3382]: E0527 17:04:39.158643 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.158918 kubelet[3382]: E0527 17:04:39.158907 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.159021 kubelet[3382]: W0527 17:04:39.158980 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.159122 kubelet[3382]: E0527 17:04:39.159073 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.159593 kubelet[3382]: E0527 17:04:39.159527 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.159593 kubelet[3382]: W0527 17:04:39.159540 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.159593 kubelet[3382]: E0527 17:04:39.159551 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.159964 kubelet[3382]: E0527 17:04:39.159873 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.159964 kubelet[3382]: W0527 17:04:39.159883 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.159964 kubelet[3382]: E0527 17:04:39.159892 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.160240 kubelet[3382]: E0527 17:04:39.160226 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.160357 kubelet[3382]: W0527 17:04:39.160296 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.160357 kubelet[3382]: E0527 17:04:39.160311 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.161347 kubelet[3382]: E0527 17:04:39.161164 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.161347 kubelet[3382]: W0527 17:04:39.161182 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.161347 kubelet[3382]: E0527 17:04:39.161194 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.161603 kubelet[3382]: E0527 17:04:39.161507 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.161603 kubelet[3382]: W0527 17:04:39.161520 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.162026 kubelet[3382]: E0527 17:04:39.161530 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.163565 kubelet[3382]: E0527 17:04:39.162175 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.163565 kubelet[3382]: W0527 17:04:39.162188 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.163565 kubelet[3382]: E0527 17:04:39.162198 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.163961 kubelet[3382]: E0527 17:04:39.163711 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.163961 kubelet[3382]: W0527 17:04:39.163726 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.163961 kubelet[3382]: E0527 17:04:39.163737 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.164362 kubelet[3382]: E0527 17:04:39.164346 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.164933 kubelet[3382]: W0527 17:04:39.164770 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.164933 kubelet[3382]: E0527 17:04:39.164796 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.165482 kubelet[3382]: E0527 17:04:39.165337 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.165482 kubelet[3382]: W0527 17:04:39.165351 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.165482 kubelet[3382]: E0527 17:04:39.165364 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:39.165622 kubelet[3382]: E0527 17:04:39.165611 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.165672 kubelet[3382]: W0527 17:04:39.165661 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.165718 kubelet[3382]: E0527 17:04:39.165708 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.179880 kubelet[3382]: E0527 17:04:39.179847 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:39.180107 kubelet[3382]: W0527 17:04:39.180035 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:39.180107 kubelet[3382]: E0527 17:04:39.180066 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:39.191122 containerd[1871]: time="2025-05-27T17:04:39.191066817Z" level=info msg="connecting to shim da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9" address="unix:///run/containerd/s/28369acd563e0181034220caf39a34d0c20057a2a22191e19d2b66fb9fb78aa4" namespace=k8s.io protocol=ttrpc version=3 May 27 17:04:39.220220 systemd[1]: Started cri-containerd-da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9.scope - libcontainer container da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9. 
May 27 17:04:39.278223 containerd[1871]: time="2025-05-27T17:04:39.278127362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-56nqq,Uid:87aa832e-a4a6-47b8-a7ee-7abaf7f68657,Namespace:calico-system,Attempt:0,} returns sandbox id \"da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9\"" May 27 17:04:40.160175 kubelet[3382]: E0527 17:04:40.159586 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tmb64" podUID="9cad5d59-f409-4303-a7b2-86efde3e9bab" May 27 17:04:40.521340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136546356.mount: Deactivated successfully. May 27 17:04:41.011909 containerd[1871]: time="2025-05-27T17:04:41.011843950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:41.019402 containerd[1871]: time="2025-05-27T17:04:41.019302880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 27 17:04:41.026379 containerd[1871]: time="2025-05-27T17:04:41.026313058Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:41.032412 containerd[1871]: time="2025-05-27T17:04:41.032337876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:41.032872 containerd[1871]: time="2025-05-27T17:04:41.032665772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id 
\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 1.876637541s" May 27 17:04:41.032872 containerd[1871]: time="2025-05-27T17:04:41.032698788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 27 17:04:41.035387 containerd[1871]: time="2025-05-27T17:04:41.033807980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 17:04:41.057158 containerd[1871]: time="2025-05-27T17:04:41.056983539Z" level=info msg="CreateContainer within sandbox \"5e972b2b36841a55671e4bb6f5427687b564343b3583581bf07039f8afac07d9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 17:04:41.092102 containerd[1871]: time="2025-05-27T17:04:41.092051301Z" level=info msg="Container 285337bc088196d6625c1501b9c22bd1cb77fb1b60877ed27446ac24e88e1c92: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:41.119783 containerd[1871]: time="2025-05-27T17:04:41.119731125Z" level=info msg="CreateContainer within sandbox \"5e972b2b36841a55671e4bb6f5427687b564343b3583581bf07039f8afac07d9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"285337bc088196d6625c1501b9c22bd1cb77fb1b60877ed27446ac24e88e1c92\"" May 27 17:04:41.121688 containerd[1871]: time="2025-05-27T17:04:41.121573526Z" level=info msg="StartContainer for \"285337bc088196d6625c1501b9c22bd1cb77fb1b60877ed27446ac24e88e1c92\"" May 27 17:04:41.123918 containerd[1871]: time="2025-05-27T17:04:41.123514510Z" level=info msg="connecting to shim 285337bc088196d6625c1501b9c22bd1cb77fb1b60877ed27446ac24e88e1c92" address="unix:///run/containerd/s/ad0ab51fb5cc58d080110e2fa1b57676c483bf88980d37f4e177dcb375747571" protocol=ttrpc version=3 May 27 
17:04:41.145950 systemd[1]: Started cri-containerd-285337bc088196d6625c1501b9c22bd1cb77fb1b60877ed27446ac24e88e1c92.scope - libcontainer container 285337bc088196d6625c1501b9c22bd1cb77fb1b60877ed27446ac24e88e1c92. May 27 17:04:41.187158 containerd[1871]: time="2025-05-27T17:04:41.186755217Z" level=info msg="StartContainer for \"285337bc088196d6625c1501b9c22bd1cb77fb1b60877ed27446ac24e88e1c92\" returns successfully" May 27 17:04:41.249846 kubelet[3382]: E0527 17:04:41.249680 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.249846 kubelet[3382]: W0527 17:04:41.249838 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.250766 kubelet[3382]: E0527 17:04:41.249864 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.250976 kubelet[3382]: E0527 17:04:41.250947 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.251165 kubelet[3382]: W0527 17:04:41.251017 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.251204 kubelet[3382]: E0527 17:04:41.251166 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.251650 kubelet[3382]: E0527 17:04:41.251621 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.251650 kubelet[3382]: W0527 17:04:41.251641 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.251650 kubelet[3382]: E0527 17:04:41.251654 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.252305 kubelet[3382]: E0527 17:04:41.252120 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.252305 kubelet[3382]: W0527 17:04:41.252138 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.252305 kubelet[3382]: E0527 17:04:41.252150 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.252553 kubelet[3382]: E0527 17:04:41.252531 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.252553 kubelet[3382]: W0527 17:04:41.252546 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.252611 kubelet[3382]: E0527 17:04:41.252557 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.252916 kubelet[3382]: E0527 17:04:41.252859 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.252916 kubelet[3382]: W0527 17:04:41.252874 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.253096 kubelet[3382]: E0527 17:04:41.252973 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.253387 kubelet[3382]: E0527 17:04:41.253219 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.253387 kubelet[3382]: W0527 17:04:41.253232 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.253387 kubelet[3382]: E0527 17:04:41.253244 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.253659 kubelet[3382]: E0527 17:04:41.253636 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.253659 kubelet[3382]: W0527 17:04:41.253651 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.253659 kubelet[3382]: E0527 17:04:41.253661 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.254094 kubelet[3382]: E0527 17:04:41.254072 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.254094 kubelet[3382]: W0527 17:04:41.254087 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.254094 kubelet[3382]: E0527 17:04:41.254097 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.254283 kubelet[3382]: E0527 17:04:41.254266 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.254327 kubelet[3382]: W0527 17:04:41.254277 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.254327 kubelet[3382]: E0527 17:04:41.254305 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.254450 kubelet[3382]: E0527 17:04:41.254432 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.254450 kubelet[3382]: W0527 17:04:41.254442 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.254450 kubelet[3382]: E0527 17:04:41.254449 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.254610 kubelet[3382]: E0527 17:04:41.254590 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.254610 kubelet[3382]: W0527 17:04:41.254600 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.254610 kubelet[3382]: E0527 17:04:41.254607 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.254740 kubelet[3382]: E0527 17:04:41.254723 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.254740 kubelet[3382]: W0527 17:04:41.254735 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.254773 kubelet[3382]: E0527 17:04:41.254741 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.254896 kubelet[3382]: E0527 17:04:41.254882 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.254896 kubelet[3382]: W0527 17:04:41.254891 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.255007 kubelet[3382]: E0527 17:04:41.254898 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.255157 kubelet[3382]: E0527 17:04:41.255139 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.255157 kubelet[3382]: W0527 17:04:41.255153 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.255203 kubelet[3382]: E0527 17:04:41.255163 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.269163 kubelet[3382]: E0527 17:04:41.268711 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.269163 kubelet[3382]: W0527 17:04:41.268743 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.269163 kubelet[3382]: E0527 17:04:41.268765 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.269327 kubelet[3382]: E0527 17:04:41.269293 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.269327 kubelet[3382]: W0527 17:04:41.269307 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.269327 kubelet[3382]: E0527 17:04:41.269320 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.269918 kubelet[3382]: E0527 17:04:41.269644 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.269918 kubelet[3382]: W0527 17:04:41.269661 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.269918 kubelet[3382]: E0527 17:04:41.269672 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.270094 kubelet[3382]: E0527 17:04:41.270071 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.270094 kubelet[3382]: W0527 17:04:41.270090 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.270145 kubelet[3382]: E0527 17:04:41.270102 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.270966 kubelet[3382]: E0527 17:04:41.270443 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.270966 kubelet[3382]: W0527 17:04:41.270460 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.270966 kubelet[3382]: E0527 17:04:41.270471 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.270966 kubelet[3382]: E0527 17:04:41.270817 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.270966 kubelet[3382]: W0527 17:04:41.270830 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.270966 kubelet[3382]: E0527 17:04:41.270842 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.271296 kubelet[3382]: E0527 17:04:41.271173 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.271296 kubelet[3382]: W0527 17:04:41.271189 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.271296 kubelet[3382]: E0527 17:04:41.271199 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.272446 kubelet[3382]: E0527 17:04:41.272179 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.272446 kubelet[3382]: W0527 17:04:41.272196 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.272446 kubelet[3382]: E0527 17:04:41.272208 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.272833 kubelet[3382]: E0527 17:04:41.272734 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.273117 kubelet[3382]: W0527 17:04:41.272897 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.273117 kubelet[3382]: E0527 17:04:41.272917 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.273343 kubelet[3382]: E0527 17:04:41.273327 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.273680 kubelet[3382]: W0527 17:04:41.273462 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.273680 kubelet[3382]: E0527 17:04:41.273506 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.275399 kubelet[3382]: E0527 17:04:41.275087 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.275399 kubelet[3382]: W0527 17:04:41.275107 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.275399 kubelet[3382]: E0527 17:04:41.275124 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.275602 kubelet[3382]: E0527 17:04:41.275586 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.275776 kubelet[3382]: W0527 17:04:41.275656 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.275776 kubelet[3382]: E0527 17:04:41.275674 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.275898 kubelet[3382]: E0527 17:04:41.275886 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.275951 kubelet[3382]: W0527 17:04:41.275941 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.276009 kubelet[3382]: E0527 17:04:41.275998 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.276550 kubelet[3382]: E0527 17:04:41.276205 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.276550 kubelet[3382]: W0527 17:04:41.276216 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.276550 kubelet[3382]: E0527 17:04:41.276227 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.277099 kubelet[3382]: E0527 17:04:41.277080 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.279080 kubelet[3382]: W0527 17:04:41.279037 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.279080 kubelet[3382]: E0527 17:04:41.279075 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.280238 kubelet[3382]: E0527 17:04:41.280214 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.280238 kubelet[3382]: W0527 17:04:41.280234 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.280327 kubelet[3382]: E0527 17:04:41.280251 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:41.280632 kubelet[3382]: E0527 17:04:41.280610 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.280632 kubelet[3382]: W0527 17:04:41.280629 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.280690 kubelet[3382]: E0527 17:04:41.280641 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:41.281719 kubelet[3382]: E0527 17:04:41.281688 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:41.281719 kubelet[3382]: W0527 17:04:41.281709 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:41.281826 kubelet[3382]: E0527 17:04:41.281808 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.160701 kubelet[3382]: E0527 17:04:42.160552 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tmb64" podUID="9cad5d59-f409-4303-a7b2-86efde3e9bab" May 27 17:04:42.228616 kubelet[3382]: I0527 17:04:42.228582 3382 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:04:42.263506 kubelet[3382]: E0527 17:04:42.263476 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.263506 kubelet[3382]: W0527 17:04:42.263496 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.263506 kubelet[3382]: E0527 17:04:42.263515 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.263960 kubelet[3382]: E0527 17:04:42.263648 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.263960 kubelet[3382]: W0527 17:04:42.263655 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.263960 kubelet[3382]: E0527 17:04:42.263662 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.263960 kubelet[3382]: E0527 17:04:42.263760 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.263960 kubelet[3382]: W0527 17:04:42.263773 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.263960 kubelet[3382]: E0527 17:04:42.263779 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.263960 kubelet[3382]: E0527 17:04:42.263879 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.263960 kubelet[3382]: W0527 17:04:42.263883 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.263960 kubelet[3382]: E0527 17:04:42.263888 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.264199 kubelet[3382]: E0527 17:04:42.264177 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.264199 kubelet[3382]: W0527 17:04:42.264194 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.264232 kubelet[3382]: E0527 17:04:42.264202 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.264372 kubelet[3382]: E0527 17:04:42.264359 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.264372 kubelet[3382]: W0527 17:04:42.264370 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.264422 kubelet[3382]: E0527 17:04:42.264378 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.264505 kubelet[3382]: E0527 17:04:42.264493 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.264505 kubelet[3382]: W0527 17:04:42.264501 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.264535 kubelet[3382]: E0527 17:04:42.264507 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.264650 kubelet[3382]: E0527 17:04:42.264639 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.264650 kubelet[3382]: W0527 17:04:42.264646 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.264699 kubelet[3382]: E0527 17:04:42.264652 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.264807 kubelet[3382]: E0527 17:04:42.264794 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.264807 kubelet[3382]: W0527 17:04:42.264802 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.264846 kubelet[3382]: E0527 17:04:42.264810 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.264927 kubelet[3382]: E0527 17:04:42.264915 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.264927 kubelet[3382]: W0527 17:04:42.264923 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.264961 kubelet[3382]: E0527 17:04:42.264928 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.265054 kubelet[3382]: E0527 17:04:42.265037 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.265054 kubelet[3382]: W0527 17:04:42.265050 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.265084 kubelet[3382]: E0527 17:04:42.265056 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.265159 kubelet[3382]: E0527 17:04:42.265149 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.265159 kubelet[3382]: W0527 17:04:42.265156 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.265190 kubelet[3382]: E0527 17:04:42.265160 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.265260 kubelet[3382]: E0527 17:04:42.265250 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.265260 kubelet[3382]: W0527 17:04:42.265258 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.265295 kubelet[3382]: E0527 17:04:42.265271 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.265364 kubelet[3382]: E0527 17:04:42.265353 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.265364 kubelet[3382]: W0527 17:04:42.265361 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.265393 kubelet[3382]: E0527 17:04:42.265367 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.265472 kubelet[3382]: E0527 17:04:42.265461 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.265472 kubelet[3382]: W0527 17:04:42.265467 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.265472 kubelet[3382]: E0527 17:04:42.265472 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.277962 kubelet[3382]: E0527 17:04:42.277933 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.277962 kubelet[3382]: W0527 17:04:42.277958 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.278096 kubelet[3382]: E0527 17:04:42.277973 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.278173 kubelet[3382]: E0527 17:04:42.278160 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.278173 kubelet[3382]: W0527 17:04:42.278170 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.278217 kubelet[3382]: E0527 17:04:42.278178 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.278345 kubelet[3382]: E0527 17:04:42.278333 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.278345 kubelet[3382]: W0527 17:04:42.278341 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.278399 kubelet[3382]: E0527 17:04:42.278349 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.278544 kubelet[3382]: E0527 17:04:42.278530 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.278544 kubelet[3382]: W0527 17:04:42.278543 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.278595 kubelet[3382]: E0527 17:04:42.278550 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.278680 kubelet[3382]: E0527 17:04:42.278671 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.278680 kubelet[3382]: W0527 17:04:42.278679 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.278730 kubelet[3382]: E0527 17:04:42.278685 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.278779 kubelet[3382]: E0527 17:04:42.278769 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.278806 kubelet[3382]: W0527 17:04:42.278783 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.278806 kubelet[3382]: E0527 17:04:42.278789 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.278952 kubelet[3382]: E0527 17:04:42.278939 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.278952 kubelet[3382]: W0527 17:04:42.278948 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.279027 kubelet[3382]: E0527 17:04:42.278955 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.279265 kubelet[3382]: E0527 17:04:42.279247 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.279436 kubelet[3382]: W0527 17:04:42.279330 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.279436 kubelet[3382]: E0527 17:04:42.279347 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.279559 kubelet[3382]: E0527 17:04:42.279548 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.279618 kubelet[3382]: W0527 17:04:42.279608 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.279748 kubelet[3382]: E0527 17:04:42.279653 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.279843 kubelet[3382]: E0527 17:04:42.279832 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.279894 kubelet[3382]: W0527 17:04:42.279884 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.279942 kubelet[3382]: E0527 17:04:42.279930 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.280243 kubelet[3382]: E0527 17:04:42.280151 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.280243 kubelet[3382]: W0527 17:04:42.280162 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.280243 kubelet[3382]: E0527 17:04:42.280171 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.280400 kubelet[3382]: E0527 17:04:42.280388 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.280452 kubelet[3382]: W0527 17:04:42.280442 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.280495 kubelet[3382]: E0527 17:04:42.280485 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.280874 kubelet[3382]: E0527 17:04:42.280700 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.280874 kubelet[3382]: W0527 17:04:42.280712 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.280874 kubelet[3382]: E0527 17:04:42.280721 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.280976 kubelet[3382]: E0527 17:04:42.280908 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.280976 kubelet[3382]: W0527 17:04:42.280918 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.280976 kubelet[3382]: E0527 17:04:42.280926 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.281063 kubelet[3382]: E0527 17:04:42.281049 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.281063 kubelet[3382]: W0527 17:04:42.281060 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.281107 kubelet[3382]: E0527 17:04:42.281068 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.281196 kubelet[3382]: E0527 17:04:42.281176 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.281196 kubelet[3382]: W0527 17:04:42.281187 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.281196 kubelet[3382]: E0527 17:04:42.281193 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:42.281430 kubelet[3382]: E0527 17:04:42.281416 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.281617 kubelet[3382]: W0527 17:04:42.281491 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.281617 kubelet[3382]: E0527 17:04:42.281511 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:04:42.281733 kubelet[3382]: E0527 17:04:42.281723 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:04:42.281783 kubelet[3382]: W0527 17:04:42.281774 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:04:42.281842 kubelet[3382]: E0527 17:04:42.281819 3382 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:04:43.005617 containerd[1871]: time="2025-05-27T17:04:43.005092998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:43.011465 containerd[1871]: time="2025-05-27T17:04:43.011420464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 27 17:04:43.021043 containerd[1871]: time="2025-05-27T17:04:43.020953643Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:43.030665 containerd[1871]: time="2025-05-27T17:04:43.030592302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:43.031267 containerd[1871]: time="2025-05-27T17:04:43.031093022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.995750977s" May 27 17:04:43.031267 containerd[1871]: time="2025-05-27T17:04:43.031122326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 27 17:04:43.042171 containerd[1871]: time="2025-05-27T17:04:43.042134097Z" level=info msg="CreateContainer within sandbox \"da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 17:04:43.083334 containerd[1871]: time="2025-05-27T17:04:43.083273341Z" level=info msg="Container 6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:43.109539 containerd[1871]: time="2025-05-27T17:04:43.109468765Z" level=info msg="CreateContainer within sandbox \"da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091\"" May 27 17:04:43.111058 containerd[1871]: time="2025-05-27T17:04:43.110429029Z" level=info msg="StartContainer for \"6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091\"" May 27 17:04:43.112707 containerd[1871]: time="2025-05-27T17:04:43.112671062Z" level=info msg="connecting to shim 6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091" address="unix:///run/containerd/s/28369acd563e0181034220caf39a34d0c20057a2a22191e19d2b66fb9fb78aa4" protocol=ttrpc version=3 May 27 17:04:43.129173 systemd[1]: Started cri-containerd-6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091.scope - libcontainer container 6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091. May 27 17:04:43.165380 containerd[1871]: time="2025-05-27T17:04:43.165332589Z" level=info msg="StartContainer for \"6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091\" returns successfully" May 27 17:04:43.172007 systemd[1]: cri-containerd-6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091.scope: Deactivated successfully. 
May 27 17:04:43.177141 containerd[1871]: time="2025-05-27T17:04:43.177056529Z" level=info msg="received exit event container_id:\"6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091\" id:\"6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091\" pid:4078 exited_at:{seconds:1748365483 nanos:176591033}" May 27 17:04:43.177296 containerd[1871]: time="2025-05-27T17:04:43.177252921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091\" id:\"6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091\" pid:4078 exited_at:{seconds:1748365483 nanos:176591033}" May 27 17:04:43.193465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a9173ccc78572dc6dcab0bf17f1a8895553978540e59083b4e201dd4261a091-rootfs.mount: Deactivated successfully. May 27 17:04:43.252024 kubelet[3382]: I0527 17:04:43.251086 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f4595556c-dmt7z" podStartSLOduration=3.373117104 podStartE2EDuration="5.251070534s" podCreationTimestamp="2025-05-27 17:04:38 +0000 UTC" firstStartedPulling="2025-05-27 17:04:39.155731158 +0000 UTC m=+17.175245395" lastFinishedPulling="2025-05-27 17:04:41.033684588 +0000 UTC m=+19.053198825" observedRunningTime="2025-05-27 17:04:41.246150938 +0000 UTC m=+19.265665175" watchObservedRunningTime="2025-05-27 17:04:43.251070534 +0000 UTC m=+21.270584771" May 27 17:04:44.159494 kubelet[3382]: E0527 17:04:44.159295 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tmb64" podUID="9cad5d59-f409-4303-a7b2-86efde3e9bab" May 27 17:04:45.243354 containerd[1871]: time="2025-05-27T17:04:45.243293014Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 17:04:46.159803 kubelet[3382]: E0527 17:04:46.159569 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tmb64" podUID="9cad5d59-f409-4303-a7b2-86efde3e9bab" May 27 17:04:48.159958 kubelet[3382]: E0527 17:04:48.159736 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tmb64" podUID="9cad5d59-f409-4303-a7b2-86efde3e9bab" May 27 17:04:48.931040 containerd[1871]: time="2025-05-27T17:04:48.930865446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:48.933832 containerd[1871]: time="2025-05-27T17:04:48.933666239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 27 17:04:48.939319 containerd[1871]: time="2025-05-27T17:04:48.939262287Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:48.943720 containerd[1871]: time="2025-05-27T17:04:48.943445840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:04:48.943974 containerd[1871]: time="2025-05-27T17:04:48.943941264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 3.700610666s" May 27 17:04:48.944146 containerd[1871]: time="2025-05-27T17:04:48.943976008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 27 17:04:48.954663 containerd[1871]: time="2025-05-27T17:04:48.953153793Z" level=info msg="CreateContainer within sandbox \"da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 17:04:48.992845 containerd[1871]: time="2025-05-27T17:04:48.992593095Z" level=info msg="Container 947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d: CDI devices from CRI Config.CDIDevices: []" May 27 17:04:49.039447 containerd[1871]: time="2025-05-27T17:04:49.039396421Z" level=info msg="CreateContainer within sandbox \"da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d\"" May 27 17:04:49.042107 containerd[1871]: time="2025-05-27T17:04:49.042060662Z" level=info msg="StartContainer for \"947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d\"" May 27 17:04:49.043411 containerd[1871]: time="2025-05-27T17:04:49.043379478Z" level=info msg="connecting to shim 947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d" address="unix:///run/containerd/s/28369acd563e0181034220caf39a34d0c20057a2a22191e19d2b66fb9fb78aa4" protocol=ttrpc version=3 May 27 17:04:49.065195 systemd[1]: Started cri-containerd-947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d.scope - libcontainer container 947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d. 
May 27 17:04:49.101429 containerd[1871]: time="2025-05-27T17:04:49.101385206Z" level=info msg="StartContainer for \"947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d\" returns successfully" May 27 17:04:50.160050 kubelet[3382]: E0527 17:04:50.159219 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tmb64" podUID="9cad5d59-f409-4303-a7b2-86efde3e9bab" May 27 17:04:50.215929 containerd[1871]: time="2025-05-27T17:04:50.215746450Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:04:50.218252 systemd[1]: cri-containerd-947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d.scope: Deactivated successfully. May 27 17:04:50.218852 systemd[1]: cri-containerd-947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d.scope: Consumed 358ms CPU time, 188.7M memory peak, 165.5M written to disk. 
May 27 17:04:50.221218 containerd[1871]: time="2025-05-27T17:04:50.221068211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d\" id:\"947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d\" pid:4137 exited_at:{seconds:1748365490 nanos:220606779}" May 27 17:04:50.221218 containerd[1871]: time="2025-05-27T17:04:50.221112307Z" level=info msg="received exit event container_id:\"947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d\" id:\"947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d\" pid:4137 exited_at:{seconds:1748365490 nanos:220606779}" May 27 17:04:50.235514 kubelet[3382]: I0527 17:04:50.235284 3382 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:04:50.247540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-947ec0b453192ef8c8e7f0a7c8af0cfdf0a2a12347f1793f8e01889b5f69ea5d-rootfs.mount: Deactivated successfully. May 27 17:04:51.074943 systemd[1]: Created slice kubepods-burstable-pod9e4be9e5_106a_4380_9190_17d1402c83e7.slice - libcontainer container kubepods-burstable-pod9e4be9e5_106a_4380_9190_17d1402c83e7.slice. May 27 17:04:51.101250 systemd[1]: Created slice kubepods-besteffort-pod6168d23d_915e_4571_a018_ac172eb9454e.slice - libcontainer container kubepods-besteffort-pod6168d23d_915e_4571_a018_ac172eb9454e.slice. May 27 17:04:51.115687 systemd[1]: Created slice kubepods-besteffort-pod0f0ca896_7cec_4519_bde0_47ca41dbc403.slice - libcontainer container kubepods-besteffort-pod0f0ca896_7cec_4519_bde0_47ca41dbc403.slice. May 27 17:04:51.127855 systemd[1]: Created slice kubepods-besteffort-pod1ed3902b_f80a_4d5d_a3a8_c425965d5219.slice - libcontainer container kubepods-besteffort-pod1ed3902b_f80a_4d5d_a3a8_c425965d5219.slice. 
May 27 17:04:51.140048 systemd[1]: Created slice kubepods-burstable-pod30f36697_5f7e_4ded_934b_1780a0c861a4.slice - libcontainer container kubepods-burstable-pod30f36697_5f7e_4ded_934b_1780a0c861a4.slice. May 27 17:04:51.146574 systemd[1]: Created slice kubepods-besteffort-pod3da35495_9316_4296_a7ef_679ef4f4be92.slice - libcontainer container kubepods-besteffort-pod3da35495_9316_4296_a7ef_679ef4f4be92.slice. May 27 17:04:51.153032 kubelet[3382]: I0527 17:04:51.152534 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6168d23d-915e-4571-a018-ac172eb9454e-whisker-backend-key-pair\") pod \"whisker-6f6db8d685-8fpst\" (UID: \"6168d23d-915e-4571-a018-ac172eb9454e\") " pod="calico-system/whisker-6f6db8d685-8fpst" May 27 17:04:51.153032 kubelet[3382]: I0527 17:04:51.152575 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed3902b-f80a-4d5d-a3a8-c425965d5219-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-2hnhp\" (UID: \"1ed3902b-f80a-4d5d-a3a8-c425965d5219\") " pod="calico-system/goldmane-78d55f7ddc-2hnhp" May 27 17:04:51.153032 kubelet[3382]: I0527 17:04:51.152590 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6168d23d-915e-4571-a018-ac172eb9454e-whisker-ca-bundle\") pod \"whisker-6f6db8d685-8fpst\" (UID: \"6168d23d-915e-4571-a018-ac172eb9454e\") " pod="calico-system/whisker-6f6db8d685-8fpst" May 27 17:04:51.153032 kubelet[3382]: I0527 17:04:51.152601 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgtl\" (UniqueName: \"kubernetes.io/projected/6168d23d-915e-4571-a018-ac172eb9454e-kube-api-access-bbgtl\") pod \"whisker-6f6db8d685-8fpst\" (UID: 
\"6168d23d-915e-4571-a018-ac172eb9454e\") " pod="calico-system/whisker-6f6db8d685-8fpst" May 27 17:04:51.153032 kubelet[3382]: I0527 17:04:51.152613 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30f36697-5f7e-4ded-934b-1780a0c861a4-config-volume\") pod \"coredns-674b8bbfcf-xwp7r\" (UID: \"30f36697-5f7e-4ded-934b-1780a0c861a4\") " pod="kube-system/coredns-674b8bbfcf-xwp7r" May 27 17:04:51.153656 kubelet[3382]: I0527 17:04:51.152631 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1ed3902b-f80a-4d5d-a3a8-c425965d5219-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-2hnhp\" (UID: \"1ed3902b-f80a-4d5d-a3a8-c425965d5219\") " pod="calico-system/goldmane-78d55f7ddc-2hnhp" May 27 17:04:51.153656 kubelet[3382]: I0527 17:04:51.152641 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjhsn\" (UniqueName: \"kubernetes.io/projected/30f36697-5f7e-4ded-934b-1780a0c861a4-kube-api-access-gjhsn\") pod \"coredns-674b8bbfcf-xwp7r\" (UID: \"30f36697-5f7e-4ded-934b-1780a0c861a4\") " pod="kube-system/coredns-674b8bbfcf-xwp7r" May 27 17:04:51.153656 kubelet[3382]: I0527 17:04:51.152657 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f0ca896-7cec-4519-bde0-47ca41dbc403-tigera-ca-bundle\") pod \"calico-kube-controllers-58544bdc56-m6766\" (UID: \"0f0ca896-7cec-4519-bde0-47ca41dbc403\") " pod="calico-system/calico-kube-controllers-58544bdc56-m6766" May 27 17:04:51.153656 kubelet[3382]: I0527 17:04:51.152672 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzlzc\" (UniqueName: 
\"kubernetes.io/projected/840ee727-191a-49f8-8e10-04e6abda410d-kube-api-access-kzlzc\") pod \"calico-apiserver-68cd4d8cbd-298nw\" (UID: \"840ee727-191a-49f8-8e10-04e6abda410d\") " pod="calico-apiserver/calico-apiserver-68cd4d8cbd-298nw" May 27 17:04:51.153656 kubelet[3382]: I0527 17:04:51.152687 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/840ee727-191a-49f8-8e10-04e6abda410d-calico-apiserver-certs\") pod \"calico-apiserver-68cd4d8cbd-298nw\" (UID: \"840ee727-191a-49f8-8e10-04e6abda410d\") " pod="calico-apiserver/calico-apiserver-68cd4d8cbd-298nw" May 27 17:04:51.153740 kubelet[3382]: I0527 17:04:51.152719 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w25ck\" (UniqueName: \"kubernetes.io/projected/0f0ca896-7cec-4519-bde0-47ca41dbc403-kube-api-access-w25ck\") pod \"calico-kube-controllers-58544bdc56-m6766\" (UID: \"0f0ca896-7cec-4519-bde0-47ca41dbc403\") " pod="calico-system/calico-kube-controllers-58544bdc56-m6766" May 27 17:04:51.153740 kubelet[3382]: I0527 17:04:51.152729 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed3902b-f80a-4d5d-a3a8-c425965d5219-config\") pod \"goldmane-78d55f7ddc-2hnhp\" (UID: \"1ed3902b-f80a-4d5d-a3a8-c425965d5219\") " pod="calico-system/goldmane-78d55f7ddc-2hnhp" May 27 17:04:51.153740 kubelet[3382]: I0527 17:04:51.152739 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lprff\" (UniqueName: \"kubernetes.io/projected/1ed3902b-f80a-4d5d-a3a8-c425965d5219-kube-api-access-lprff\") pod \"goldmane-78d55f7ddc-2hnhp\" (UID: \"1ed3902b-f80a-4d5d-a3a8-c425965d5219\") " pod="calico-system/goldmane-78d55f7ddc-2hnhp" May 27 17:04:51.153740 kubelet[3382]: I0527 17:04:51.152748 3382 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e4be9e5-106a-4380-9190-17d1402c83e7-config-volume\") pod \"coredns-674b8bbfcf-r6nxw\" (UID: \"9e4be9e5-106a-4380-9190-17d1402c83e7\") " pod="kube-system/coredns-674b8bbfcf-r6nxw" May 27 17:04:51.153740 kubelet[3382]: I0527 17:04:51.152760 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr5fl\" (UniqueName: \"kubernetes.io/projected/9e4be9e5-106a-4380-9190-17d1402c83e7-kube-api-access-cr5fl\") pod \"coredns-674b8bbfcf-r6nxw\" (UID: \"9e4be9e5-106a-4380-9190-17d1402c83e7\") " pod="kube-system/coredns-674b8bbfcf-r6nxw" May 27 17:04:51.158208 systemd[1]: Created slice kubepods-besteffort-pod840ee727_191a_49f8_8e10_04e6abda410d.slice - libcontainer container kubepods-besteffort-pod840ee727_191a_49f8_8e10_04e6abda410d.slice. May 27 17:04:51.202245 update_engine[1852]: I20250527 17:04:51.201097 1852 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 27 17:04:51.202245 update_engine[1852]: I20250527 17:04:51.201148 1852 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 27 17:04:51.202245 update_engine[1852]: I20250527 17:04:51.201323 1852 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 27 17:04:51.203049 update_engine[1852]: I20250527 17:04:51.202975 1852 omaha_request_params.cc:62] Current group set to alpha May 27 17:04:51.203141 update_engine[1852]: I20250527 17:04:51.203123 1852 update_attempter.cc:499] Already updated boot flags. Skipping. May 27 17:04:51.203141 update_engine[1852]: I20250527 17:04:51.203135 1852 update_attempter.cc:643] Scheduling an action processor start. 
May 27 17:04:51.203180 update_engine[1852]: I20250527 17:04:51.203153 1852 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 27 17:04:51.203196 update_engine[1852]: I20250527 17:04:51.203184 1852 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 27 17:04:51.203251 update_engine[1852]: I20250527 17:04:51.203238 1852 omaha_request_action.cc:271] Posting an Omaha request to disabled May 27 17:04:51.203251 update_engine[1852]: I20250527 17:04:51.203246 1852 omaha_request_action.cc:272] Request: May 27 17:04:51.203251 update_engine[1852]: May 27 17:04:51.203251 update_engine[1852]: May 27 17:04:51.203251 update_engine[1852]: May 27 17:04:51.203251 update_engine[1852]: May 27 17:04:51.203251 update_engine[1852]: May 27 17:04:51.203251 update_engine[1852]: May 27 17:04:51.203251 update_engine[1852]: May 27 17:04:51.203251 update_engine[1852]: May 27 17:04:51.203375 update_engine[1852]: I20250527 17:04:51.203251 1852 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:04:51.204190 update_engine[1852]: I20250527 17:04:51.204157 1852 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:04:51.204440 locksmithd[2005]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 27 17:04:51.204635 update_engine[1852]: I20250527 17:04:51.204526 1852 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 17:04:51.253735 kubelet[3382]: I0527 17:04:51.253685 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3da35495-9316-4296-a7ef-679ef4f4be92-calico-apiserver-certs\") pod \"calico-apiserver-68cd4d8cbd-vbsh5\" (UID: \"3da35495-9316-4296-a7ef-679ef4f4be92\") " pod="calico-apiserver/calico-apiserver-68cd4d8cbd-vbsh5" May 27 17:04:51.253735 kubelet[3382]: I0527 17:04:51.253751 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmxbn\" (UniqueName: \"kubernetes.io/projected/3da35495-9316-4296-a7ef-679ef4f4be92-kube-api-access-xmxbn\") pod \"calico-apiserver-68cd4d8cbd-vbsh5\" (UID: \"3da35495-9316-4296-a7ef-679ef4f4be92\") " pod="calico-apiserver/calico-apiserver-68cd4d8cbd-vbsh5" May 27 17:04:51.280541 update_engine[1852]: E20250527 17:04:51.280473 1852 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:04:51.280789 update_engine[1852]: I20250527 17:04:51.280575 1852 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 27 17:04:51.304536 containerd[1871]: time="2025-05-27T17:04:51.304491066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:04:51.378199 containerd[1871]: time="2025-05-27T17:04:51.378155125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r6nxw,Uid:9e4be9e5-106a-4380-9190-17d1402c83e7,Namespace:kube-system,Attempt:0,}" May 27 17:04:51.416247 containerd[1871]: time="2025-05-27T17:04:51.416201970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6db8d685-8fpst,Uid:6168d23d-915e-4571-a018-ac172eb9454e,Namespace:calico-system,Attempt:0,}" May 27 17:04:51.424473 containerd[1871]: time="2025-05-27T17:04:51.424304563Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-58544bdc56-m6766,Uid:0f0ca896-7cec-4519-bde0-47ca41dbc403,Namespace:calico-system,Attempt:0,}" May 27 17:04:51.438863 containerd[1871]: time="2025-05-27T17:04:51.438820541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-2hnhp,Uid:1ed3902b-f80a-4d5d-a3a8-c425965d5219,Namespace:calico-system,Attempt:0,}" May 27 17:04:51.446408 containerd[1871]: time="2025-05-27T17:04:51.446116862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwp7r,Uid:30f36697-5f7e-4ded-934b-1780a0c861a4,Namespace:kube-system,Attempt:0,}" May 27 17:04:51.455308 containerd[1871]: time="2025-05-27T17:04:51.455261487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-vbsh5,Uid:3da35495-9316-4296-a7ef-679ef4f4be92,Namespace:calico-apiserver,Attempt:0,}" May 27 17:04:51.461546 containerd[1871]: time="2025-05-27T17:04:51.461485128Z" level=error msg="Failed to destroy network for sandbox \"5c13a1c1456d067a27ef611e79030ec9edd8d667a7dbc4e2cb83b32fbbbe276c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.463967 containerd[1871]: time="2025-05-27T17:04:51.463933089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-298nw,Uid:840ee727-191a-49f8-8e10-04e6abda410d,Namespace:calico-apiserver,Attempt:0,}" May 27 17:04:51.490985 containerd[1871]: time="2025-05-27T17:04:51.490806357Z" level=error msg="Failed to destroy network for sandbox \"97b610c64ce42c73393bc71a9cf1751f9d759d8a800b4761eed55a80a0c5be6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.497601 containerd[1871]: time="2025-05-27T17:04:51.497529629Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r6nxw,Uid:9e4be9e5-106a-4380-9190-17d1402c83e7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c13a1c1456d067a27ef611e79030ec9edd8d667a7dbc4e2cb83b32fbbbe276c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.498085 kubelet[3382]: E0527 17:04:51.497798 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c13a1c1456d067a27ef611e79030ec9edd8d667a7dbc4e2cb83b32fbbbe276c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.498085 kubelet[3382]: E0527 17:04:51.497879 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c13a1c1456d067a27ef611e79030ec9edd8d667a7dbc4e2cb83b32fbbbe276c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r6nxw" May 27 17:04:51.498085 kubelet[3382]: E0527 17:04:51.497901 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c13a1c1456d067a27ef611e79030ec9edd8d667a7dbc4e2cb83b32fbbbe276c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r6nxw" May 27 17:04:51.498774 kubelet[3382]: E0527 17:04:51.498409 3382 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r6nxw_kube-system(9e4be9e5-106a-4380-9190-17d1402c83e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r6nxw_kube-system(9e4be9e5-106a-4380-9190-17d1402c83e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c13a1c1456d067a27ef611e79030ec9edd8d667a7dbc4e2cb83b32fbbbe276c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r6nxw" podUID="9e4be9e5-106a-4380-9190-17d1402c83e7" May 27 17:04:51.520344 containerd[1871]: time="2025-05-27T17:04:51.520125449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6db8d685-8fpst,Uid:6168d23d-915e-4571-a018-ac172eb9454e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b610c64ce42c73393bc71a9cf1751f9d759d8a800b4761eed55a80a0c5be6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.520505 kubelet[3382]: E0527 17:04:51.520374 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b610c64ce42c73393bc71a9cf1751f9d759d8a800b4761eed55a80a0c5be6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.520505 kubelet[3382]: E0527 17:04:51.520437 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"97b610c64ce42c73393bc71a9cf1751f9d759d8a800b4761eed55a80a0c5be6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6db8d685-8fpst" May 27 17:04:51.520505 kubelet[3382]: E0527 17:04:51.520456 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b610c64ce42c73393bc71a9cf1751f9d759d8a800b4761eed55a80a0c5be6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6db8d685-8fpst" May 27 17:04:51.520590 kubelet[3382]: E0527 17:04:51.520519 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f6db8d685-8fpst_calico-system(6168d23d-915e-4571-a018-ac172eb9454e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f6db8d685-8fpst_calico-system(6168d23d-915e-4571-a018-ac172eb9454e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97b610c64ce42c73393bc71a9cf1751f9d759d8a800b4761eed55a80a0c5be6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f6db8d685-8fpst" podUID="6168d23d-915e-4571-a018-ac172eb9454e" May 27 17:04:51.579512 containerd[1871]: time="2025-05-27T17:04:51.579402121Z" level=error msg="Failed to destroy network for sandbox \"33229060436724e447377a91fd084fae7a93d4c29107c41801695f45c395966a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 
17:04:51.583512 containerd[1871]: time="2025-05-27T17:04:51.583363673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58544bdc56-m6766,Uid:0f0ca896-7cec-4519-bde0-47ca41dbc403,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33229060436724e447377a91fd084fae7a93d4c29107c41801695f45c395966a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.584113 kubelet[3382]: E0527 17:04:51.584057 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33229060436724e447377a91fd084fae7a93d4c29107c41801695f45c395966a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.584217 kubelet[3382]: E0527 17:04:51.584124 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33229060436724e447377a91fd084fae7a93d4c29107c41801695f45c395966a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58544bdc56-m6766" May 27 17:04:51.584217 kubelet[3382]: E0527 17:04:51.584141 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33229060436724e447377a91fd084fae7a93d4c29107c41801695f45c395966a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-58544bdc56-m6766" May 27 17:04:51.584217 kubelet[3382]: E0527 17:04:51.584194 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58544bdc56-m6766_calico-system(0f0ca896-7cec-4519-bde0-47ca41dbc403)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58544bdc56-m6766_calico-system(0f0ca896-7cec-4519-bde0-47ca41dbc403)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33229060436724e447377a91fd084fae7a93d4c29107c41801695f45c395966a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58544bdc56-m6766" podUID="0f0ca896-7cec-4519-bde0-47ca41dbc403" May 27 17:04:51.609963 containerd[1871]: time="2025-05-27T17:04:51.609860005Z" level=error msg="Failed to destroy network for sandbox \"6e8d56489e7f89d8b4907ba768a607e9fdb61d07efb8cbe2200a5ac66d27e0bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.619159 containerd[1871]: time="2025-05-27T17:04:51.619110926Z" level=error msg="Failed to destroy network for sandbox \"107cd8e4030ae958d278043cb9ad6dc7b8697780da5346244c9b226dfc348f0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.620448 containerd[1871]: time="2025-05-27T17:04:51.620407687Z" level=error msg="Failed to destroy network for sandbox \"bba3280a4486d30d24e9239db82b449f72180452f5e7bfab4d1add2fccc7bd17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.623630 containerd[1871]: time="2025-05-27T17:04:51.623578543Z" level=error msg="Failed to destroy network for sandbox \"be238684af0519fb50d94b6ff82ce6e5862b9ff49d3591666037c0ec47da3d53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.635926 containerd[1871]: time="2025-05-27T17:04:51.635306881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwp7r,Uid:30f36697-5f7e-4ded-934b-1780a0c861a4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8d56489e7f89d8b4907ba768a607e9fdb61d07efb8cbe2200a5ac66d27e0bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.636158 kubelet[3382]: E0527 17:04:51.635559 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8d56489e7f89d8b4907ba768a607e9fdb61d07efb8cbe2200a5ac66d27e0bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.636158 kubelet[3382]: E0527 17:04:51.635615 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8d56489e7f89d8b4907ba768a607e9fdb61d07efb8cbe2200a5ac66d27e0bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xwp7r" May 27 
17:04:51.636158 kubelet[3382]: E0527 17:04:51.635633 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8d56489e7f89d8b4907ba768a607e9fdb61d07efb8cbe2200a5ac66d27e0bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xwp7r" May 27 17:04:51.636239 kubelet[3382]: E0527 17:04:51.635682 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xwp7r_kube-system(30f36697-5f7e-4ded-934b-1780a0c861a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xwp7r_kube-system(30f36697-5f7e-4ded-934b-1780a0c861a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e8d56489e7f89d8b4907ba768a607e9fdb61d07efb8cbe2200a5ac66d27e0bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xwp7r" podUID="30f36697-5f7e-4ded-934b-1780a0c861a4" May 27 17:04:51.641100 containerd[1871]: time="2025-05-27T17:04:51.640975698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-298nw,Uid:840ee727-191a-49f8-8e10-04e6abda410d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"107cd8e4030ae958d278043cb9ad6dc7b8697780da5346244c9b226dfc348f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.641493 kubelet[3382]: E0527 17:04:51.641443 3382 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"107cd8e4030ae958d278043cb9ad6dc7b8697780da5346244c9b226dfc348f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.641588 kubelet[3382]: E0527 17:04:51.641507 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"107cd8e4030ae958d278043cb9ad6dc7b8697780da5346244c9b226dfc348f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-298nw" May 27 17:04:51.641588 kubelet[3382]: E0527 17:04:51.641530 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"107cd8e4030ae958d278043cb9ad6dc7b8697780da5346244c9b226dfc348f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-298nw" May 27 17:04:51.641651 kubelet[3382]: E0527 17:04:51.641579 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68cd4d8cbd-298nw_calico-apiserver(840ee727-191a-49f8-8e10-04e6abda410d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68cd4d8cbd-298nw_calico-apiserver(840ee727-191a-49f8-8e10-04e6abda410d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"107cd8e4030ae958d278043cb9ad6dc7b8697780da5346244c9b226dfc348f0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-298nw" podUID="840ee727-191a-49f8-8e10-04e6abda410d" May 27 17:04:51.646279 containerd[1871]: time="2025-05-27T17:04:51.645289890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-2hnhp,Uid:1ed3902b-f80a-4d5d-a3a8-c425965d5219,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bba3280a4486d30d24e9239db82b449f72180452f5e7bfab4d1add2fccc7bd17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.646618 kubelet[3382]: E0527 17:04:51.646576 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bba3280a4486d30d24e9239db82b449f72180452f5e7bfab4d1add2fccc7bd17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.646882 kubelet[3382]: E0527 17:04:51.646723 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bba3280a4486d30d24e9239db82b449f72180452f5e7bfab4d1add2fccc7bd17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-2hnhp" May 27 17:04:51.646882 kubelet[3382]: E0527 17:04:51.646745 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bba3280a4486d30d24e9239db82b449f72180452f5e7bfab4d1add2fccc7bd17\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-2hnhp" May 27 17:04:51.647081 kubelet[3382]: E0527 17:04:51.647054 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-2hnhp_calico-system(1ed3902b-f80a-4d5d-a3a8-c425965d5219)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-2hnhp_calico-system(1ed3902b-f80a-4d5d-a3a8-c425965d5219)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bba3280a4486d30d24e9239db82b449f72180452f5e7bfab4d1add2fccc7bd17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:04:51.649292 containerd[1871]: time="2025-05-27T17:04:51.649235579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-vbsh5,Uid:3da35495-9316-4296-a7ef-679ef4f4be92,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be238684af0519fb50d94b6ff82ce6e5862b9ff49d3591666037c0ec47da3d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:51.649690 kubelet[3382]: E0527 17:04:51.649447 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be238684af0519fb50d94b6ff82ce6e5862b9ff49d3591666037c0ec47da3d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 27 17:04:51.649690 kubelet[3382]: E0527 17:04:51.649487 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be238684af0519fb50d94b6ff82ce6e5862b9ff49d3591666037c0ec47da3d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-vbsh5" May 27 17:04:51.649690 kubelet[3382]: E0527 17:04:51.649500 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be238684af0519fb50d94b6ff82ce6e5862b9ff49d3591666037c0ec47da3d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-vbsh5" May 27 17:04:51.649776 kubelet[3382]: E0527 17:04:51.649536 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68cd4d8cbd-vbsh5_calico-apiserver(3da35495-9316-4296-a7ef-679ef4f4be92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68cd4d8cbd-vbsh5_calico-apiserver(3da35495-9316-4296-a7ef-679ef4f4be92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be238684af0519fb50d94b6ff82ce6e5862b9ff49d3591666037c0ec47da3d53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-vbsh5" podUID="3da35495-9316-4296-a7ef-679ef4f4be92" May 27 17:04:52.164262 systemd[1]: Created slice kubepods-besteffort-pod9cad5d59_f409_4303_a7b2_86efde3e9bab.slice - 
libcontainer container kubepods-besteffort-pod9cad5d59_f409_4303_a7b2_86efde3e9bab.slice. May 27 17:04:52.167385 containerd[1871]: time="2025-05-27T17:04:52.167343427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tmb64,Uid:9cad5d59-f409-4303-a7b2-86efde3e9bab,Namespace:calico-system,Attempt:0,}" May 27 17:04:52.221940 containerd[1871]: time="2025-05-27T17:04:52.221871539Z" level=error msg="Failed to destroy network for sandbox \"e59441b86f0433ba3b07952ef670c048486ca5a0e4b3d89d3687c94c86c77609\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:52.228294 containerd[1871]: time="2025-05-27T17:04:52.228210924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tmb64,Uid:9cad5d59-f409-4303-a7b2-86efde3e9bab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59441b86f0433ba3b07952ef670c048486ca5a0e4b3d89d3687c94c86c77609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:52.228764 kubelet[3382]: E0527 17:04:52.228718 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59441b86f0433ba3b07952ef670c048486ca5a0e4b3d89d3687c94c86c77609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:04:52.228838 kubelet[3382]: E0527 17:04:52.228788 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e59441b86f0433ba3b07952ef670c048486ca5a0e4b3d89d3687c94c86c77609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tmb64" May 27 17:04:52.228838 kubelet[3382]: E0527 17:04:52.228808 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59441b86f0433ba3b07952ef670c048486ca5a0e4b3d89d3687c94c86c77609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tmb64" May 27 17:04:52.228891 kubelet[3382]: E0527 17:04:52.228869 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tmb64_calico-system(9cad5d59-f409-4303-a7b2-86efde3e9bab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tmb64_calico-system(9cad5d59-f409-4303-a7b2-86efde3e9bab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e59441b86f0433ba3b07952ef670c048486ca5a0e4b3d89d3687c94c86c77609\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tmb64" podUID="9cad5d59-f409-4303-a7b2-86efde3e9bab" May 27 17:04:59.746404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504262027.mount: Deactivated successfully. 
May 27 17:05:00.833186 kubelet[3382]: I0527 17:05:00.833131 3382 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:05:01.208078 update_engine[1852]: I20250527 17:05:01.207486 1852 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:05:01.208078 update_engine[1852]: I20250527 17:05:01.207739 1852 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:05:01.208472 update_engine[1852]: I20250527 17:05:01.208134 1852 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 17:05:01.310956 update_engine[1852]: E20250527 17:05:01.310880 1852 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:05:01.311143 update_engine[1852]: I20250527 17:05:01.310973 1852 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 27 17:05:03.159912 containerd[1871]: time="2025-05-27T17:05:03.159860089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tmb64,Uid:9cad5d59-f409-4303-a7b2-86efde3e9bab,Namespace:calico-system,Attempt:0,}" May 27 17:05:03.160321 containerd[1871]: time="2025-05-27T17:05:03.160044345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58544bdc56-m6766,Uid:0f0ca896-7cec-4519-bde0-47ca41dbc403,Namespace:calico-system,Attempt:0,}" May 27 17:05:04.624092 containerd[1871]: time="2025-05-27T17:05:04.160239167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-298nw,Uid:840ee727-191a-49f8-8e10-04e6abda410d,Namespace:calico-apiserver,Attempt:0,}" May 27 17:05:04.624092 containerd[1871]: time="2025-05-27T17:05:04.160510415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6db8d685-8fpst,Uid:6168d23d-915e-4571-a018-ac172eb9454e,Namespace:calico-system,Attempt:0,}" May 27 17:05:04.718012 containerd[1871]: time="2025-05-27T17:05:04.717709200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:04.723973 containerd[1871]: time="2025-05-27T17:05:04.723895817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 27 17:05:04.729019 containerd[1871]: time="2025-05-27T17:05:04.727674385Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:04.741072 containerd[1871]: time="2025-05-27T17:05:04.741018915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:04.741390 containerd[1871]: time="2025-05-27T17:05:04.741363955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 13.436821721s" May 27 17:05:04.741499 containerd[1871]: time="2025-05-27T17:05:04.741483755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 27 17:05:04.771316 containerd[1871]: time="2025-05-27T17:05:04.771272239Z" level=info msg="CreateContainer within sandbox \"da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 17:05:04.781699 containerd[1871]: time="2025-05-27T17:05:04.781640920Z" level=error msg="Failed to destroy network for sandbox \"070a17f7982971d06ed589d38617b94861d4cdfb8b19031594f2ca1d570aea07\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.796399 containerd[1871]: time="2025-05-27T17:05:04.796193402Z" level=error msg="Failed to destroy network for sandbox \"2201f2ba7ec35a3b20366f7c17edd04805bc2a841b45c11786df2917bdc7dfd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.798451 containerd[1871]: time="2025-05-27T17:05:04.798308914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58544bdc56-m6766,Uid:0f0ca896-7cec-4519-bde0-47ca41dbc403,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"070a17f7982971d06ed589d38617b94861d4cdfb8b19031594f2ca1d570aea07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.802921 containerd[1871]: time="2025-05-27T17:05:04.802857274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tmb64,Uid:9cad5d59-f409-4303-a7b2-86efde3e9bab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2201f2ba7ec35a3b20366f7c17edd04805bc2a841b45c11786df2917bdc7dfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.803465 kubelet[3382]: E0527 17:05:04.803421 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070a17f7982971d06ed589d38617b94861d4cdfb8b19031594f2ca1d570aea07\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.803789 kubelet[3382]: E0527 17:05:04.803505 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070a17f7982971d06ed589d38617b94861d4cdfb8b19031594f2ca1d570aea07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58544bdc56-m6766" May 27 17:05:04.803789 kubelet[3382]: E0527 17:05:04.803525 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070a17f7982971d06ed589d38617b94861d4cdfb8b19031594f2ca1d570aea07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58544bdc56-m6766" May 27 17:05:04.803789 kubelet[3382]: E0527 17:05:04.803570 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58544bdc56-m6766_calico-system(0f0ca896-7cec-4519-bde0-47ca41dbc403)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58544bdc56-m6766_calico-system(0f0ca896-7cec-4519-bde0-47ca41dbc403)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"070a17f7982971d06ed589d38617b94861d4cdfb8b19031594f2ca1d570aea07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58544bdc56-m6766" 
podUID="0f0ca896-7cec-4519-bde0-47ca41dbc403" May 27 17:05:04.806215 kubelet[3382]: E0527 17:05:04.806165 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2201f2ba7ec35a3b20366f7c17edd04805bc2a841b45c11786df2917bdc7dfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.806643 kubelet[3382]: E0527 17:05:04.806226 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2201f2ba7ec35a3b20366f7c17edd04805bc2a841b45c11786df2917bdc7dfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tmb64" May 27 17:05:04.806643 kubelet[3382]: E0527 17:05:04.806243 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2201f2ba7ec35a3b20366f7c17edd04805bc2a841b45c11786df2917bdc7dfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tmb64" May 27 17:05:04.806643 kubelet[3382]: E0527 17:05:04.806283 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tmb64_calico-system(9cad5d59-f409-4303-a7b2-86efde3e9bab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tmb64_calico-system(9cad5d59-f409-4303-a7b2-86efde3e9bab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2201f2ba7ec35a3b20366f7c17edd04805bc2a841b45c11786df2917bdc7dfd6\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tmb64" podUID="9cad5d59-f409-4303-a7b2-86efde3e9bab" May 27 17:05:04.824586 containerd[1871]: time="2025-05-27T17:05:04.824188637Z" level=error msg="Failed to destroy network for sandbox \"8ce4606d5557865195d0e6f2322f57ac71a2b06b1f000a1a7d1e6c1c706d3b75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.824980 containerd[1871]: time="2025-05-27T17:05:04.824943661Z" level=error msg="Failed to destroy network for sandbox \"a7e1615f7348ccedc0bb54d0e57407176e9a94247035d6aa87c3c469bfbae050\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.826911 containerd[1871]: time="2025-05-27T17:05:04.826872389Z" level=info msg="Container 319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:04.831658 containerd[1871]: time="2025-05-27T17:05:04.831605086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f6db8d685-8fpst,Uid:6168d23d-915e-4571-a018-ac172eb9454e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce4606d5557865195d0e6f2322f57ac71a2b06b1f000a1a7d1e6c1c706d3b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.832263 kubelet[3382]: E0527 17:05:04.832209 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"8ce4606d5557865195d0e6f2322f57ac71a2b06b1f000a1a7d1e6c1c706d3b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.832363 kubelet[3382]: E0527 17:05:04.832280 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce4606d5557865195d0e6f2322f57ac71a2b06b1f000a1a7d1e6c1c706d3b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6db8d685-8fpst" May 27 17:05:04.832363 kubelet[3382]: E0527 17:05:04.832302 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce4606d5557865195d0e6f2322f57ac71a2b06b1f000a1a7d1e6c1c706d3b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f6db8d685-8fpst" May 27 17:05:04.832414 kubelet[3382]: E0527 17:05:04.832363 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f6db8d685-8fpst_calico-system(6168d23d-915e-4571-a018-ac172eb9454e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f6db8d685-8fpst_calico-system(6168d23d-915e-4571-a018-ac172eb9454e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ce4606d5557865195d0e6f2322f57ac71a2b06b1f000a1a7d1e6c1c706d3b75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-6f6db8d685-8fpst" podUID="6168d23d-915e-4571-a018-ac172eb9454e" May 27 17:05:04.840908 containerd[1871]: time="2025-05-27T17:05:04.840849431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-298nw,Uid:840ee727-191a-49f8-8e10-04e6abda410d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e1615f7348ccedc0bb54d0e57407176e9a94247035d6aa87c3c469bfbae050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.841476 kubelet[3382]: E0527 17:05:04.841440 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e1615f7348ccedc0bb54d0e57407176e9a94247035d6aa87c3c469bfbae050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:04.841647 kubelet[3382]: E0527 17:05:04.841608 3382 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e1615f7348ccedc0bb54d0e57407176e9a94247035d6aa87c3c469bfbae050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-298nw" May 27 17:05:04.841781 kubelet[3382]: E0527 17:05:04.841706 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e1615f7348ccedc0bb54d0e57407176e9a94247035d6aa87c3c469bfbae050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-298nw" May 27 17:05:04.841850 kubelet[3382]: E0527 17:05:04.841770 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68cd4d8cbd-298nw_calico-apiserver(840ee727-191a-49f8-8e10-04e6abda410d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68cd4d8cbd-298nw_calico-apiserver(840ee727-191a-49f8-8e10-04e6abda410d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7e1615f7348ccedc0bb54d0e57407176e9a94247035d6aa87c3c469bfbae050\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-298nw" podUID="840ee727-191a-49f8-8e10-04e6abda410d" May 27 17:05:04.867251 containerd[1871]: time="2025-05-27T17:05:04.867159274Z" level=info msg="CreateContainer within sandbox \"da92eb399584ad62e4b70379b6e8959ee2754ea357b857e3fc8407cd36d038c9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f\"" May 27 17:05:04.868516 containerd[1871]: time="2025-05-27T17:05:04.868428954Z" level=info msg="StartContainer for \"319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f\"" May 27 17:05:04.870332 containerd[1871]: time="2025-05-27T17:05:04.870225754Z" level=info msg="connecting to shim 319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f" address="unix:///run/containerd/s/28369acd563e0181034220caf39a34d0c20057a2a22191e19d2b66fb9fb78aa4" protocol=ttrpc version=3 May 27 17:05:04.895179 systemd[1]: Started cri-containerd-319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f.scope - libcontainer container 
319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f. May 27 17:05:04.936027 containerd[1871]: time="2025-05-27T17:05:04.935225210Z" level=info msg="StartContainer for \"319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f\" returns successfully" May 27 17:05:05.159625 containerd[1871]: time="2025-05-27T17:05:05.159499837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r6nxw,Uid:9e4be9e5-106a-4380-9190-17d1402c83e7,Namespace:kube-system,Attempt:0,}" May 27 17:05:05.208111 containerd[1871]: time="2025-05-27T17:05:05.208051954Z" level=error msg="Failed to destroy network for sandbox \"54ecab01725cfdae2765b00e80c6b289e28a9d45498118ae2fd4b3beaeb21d5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:05.212923 containerd[1871]: time="2025-05-27T17:05:05.212642539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r6nxw,Uid:9e4be9e5-106a-4380-9190-17d1402c83e7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ecab01725cfdae2765b00e80c6b289e28a9d45498118ae2fd4b3beaeb21d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:05.213924 kubelet[3382]: E0527 17:05:05.213801 3382 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ecab01725cfdae2765b00e80c6b289e28a9d45498118ae2fd4b3beaeb21d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:05:05.213924 kubelet[3382]: E0527 17:05:05.213859 3382 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ecab01725cfdae2765b00e80c6b289e28a9d45498118ae2fd4b3beaeb21d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r6nxw" May 27 17:05:05.216025 kubelet[3382]: E0527 17:05:05.215525 3382 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ecab01725cfdae2765b00e80c6b289e28a9d45498118ae2fd4b3beaeb21d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r6nxw" May 27 17:05:05.216025 kubelet[3382]: E0527 17:05:05.215615 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r6nxw_kube-system(9e4be9e5-106a-4380-9190-17d1402c83e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r6nxw_kube-system(9e4be9e5-106a-4380-9190-17d1402c83e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54ecab01725cfdae2765b00e80c6b289e28a9d45498118ae2fd4b3beaeb21d5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r6nxw" podUID="9e4be9e5-106a-4380-9190-17d1402c83e7" May 27 17:05:05.305649 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 17:05:05.305787 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 17:05:05.371413 kubelet[3382]: I0527 17:05:05.371337 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-56nqq" podStartSLOduration=1.909453206 podStartE2EDuration="27.371318966s" podCreationTimestamp="2025-05-27 17:04:38 +0000 UTC" firstStartedPulling="2025-05-27 17:04:39.280601211 +0000 UTC m=+17.300115448" lastFinishedPulling="2025-05-27 17:05:04.742466971 +0000 UTC m=+42.761981208" observedRunningTime="2025-05-27 17:05:05.369190733 +0000 UTC m=+43.388705002" watchObservedRunningTime="2025-05-27 17:05:05.371318966 +0000 UTC m=+43.390833203" May 27 17:05:05.459736 containerd[1871]: time="2025-05-27T17:05:05.459380120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f\" id:\"0e5831b15555ae0c3fe89c17d69828f22dfbb0a9b03689761612b5688a2dfda4\" pid:4589 exit_status:1 exited_at:{seconds:1748365505 nanos:459080136}" May 27 17:05:05.541615 kubelet[3382]: I0527 17:05:05.541072 3382 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbgtl\" (UniqueName: \"kubernetes.io/projected/6168d23d-915e-4571-a018-ac172eb9454e-kube-api-access-bbgtl\") pod \"6168d23d-915e-4571-a018-ac172eb9454e\" (UID: \"6168d23d-915e-4571-a018-ac172eb9454e\") " May 27 17:05:05.541615 kubelet[3382]: I0527 17:05:05.541135 3382 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6168d23d-915e-4571-a018-ac172eb9454e-whisker-ca-bundle\") pod \"6168d23d-915e-4571-a018-ac172eb9454e\" (UID: \"6168d23d-915e-4571-a018-ac172eb9454e\") " May 27 17:05:05.541615 kubelet[3382]: I0527 17:05:05.541161 3382 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6168d23d-915e-4571-a018-ac172eb9454e-whisker-backend-key-pair\") pod 
\"6168d23d-915e-4571-a018-ac172eb9454e\" (UID: \"6168d23d-915e-4571-a018-ac172eb9454e\") " May 27 17:05:05.545668 kubelet[3382]: I0527 17:05:05.545616 3382 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6168d23d-915e-4571-a018-ac172eb9454e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6168d23d-915e-4571-a018-ac172eb9454e" (UID: "6168d23d-915e-4571-a018-ac172eb9454e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:05:05.546391 kubelet[3382]: I0527 17:05:05.546353 3382 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6168d23d-915e-4571-a018-ac172eb9454e-kube-api-access-bbgtl" (OuterVolumeSpecName: "kube-api-access-bbgtl") pod "6168d23d-915e-4571-a018-ac172eb9454e" (UID: "6168d23d-915e-4571-a018-ac172eb9454e"). InnerVolumeSpecName "kube-api-access-bbgtl". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:05:05.546608 kubelet[3382]: I0527 17:05:05.546584 3382 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6168d23d-915e-4571-a018-ac172eb9454e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6168d23d-915e-4571-a018-ac172eb9454e" (UID: "6168d23d-915e-4571-a018-ac172eb9454e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:05:05.642448 kubelet[3382]: I0527 17:05:05.642400 3382 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bbgtl\" (UniqueName: \"kubernetes.io/projected/6168d23d-915e-4571-a018-ac172eb9454e-kube-api-access-bbgtl\") on node \"ci-4344.0.0-a-910621710e\" DevicePath \"\"" May 27 17:05:05.642448 kubelet[3382]: I0527 17:05:05.642439 3382 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6168d23d-915e-4571-a018-ac172eb9454e-whisker-ca-bundle\") on node \"ci-4344.0.0-a-910621710e\" DevicePath \"\"" May 27 17:05:05.642448 kubelet[3382]: I0527 17:05:05.642450 3382 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6168d23d-915e-4571-a018-ac172eb9454e-whisker-backend-key-pair\") on node \"ci-4344.0.0-a-910621710e\" DevicePath \"\"" May 27 17:05:05.655937 systemd[1]: run-netns-cni\x2de9754b88\x2db960\x2de6be\x2d3895\x2dff7e519b20d8.mount: Deactivated successfully. May 27 17:05:05.656734 systemd[1]: run-netns-cni\x2d684877bb\x2df689\x2d2985\x2d6b82\x2d1753623d60fb.mount: Deactivated successfully. May 27 17:05:05.657048 systemd[1]: run-netns-cni\x2dc0453436\x2d8ff4\x2de4e9\x2d73b8\x2d0fe8bf23e85a.mount: Deactivated successfully. May 27 17:05:05.657200 systemd[1]: run-netns-cni\x2dd8a05c3b\x2de07e\x2df2be\x2d6375\x2d06ce7dd9a626.mount: Deactivated successfully. May 27 17:05:05.657315 systemd[1]: var-lib-kubelet-pods-6168d23d\x2d915e\x2d4571\x2da018\x2dac172eb9454e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbbgtl.mount: Deactivated successfully. May 27 17:05:05.657412 systemd[1]: var-lib-kubelet-pods-6168d23d\x2d915e\x2d4571\x2da018\x2dac172eb9454e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 27 17:05:06.160292 containerd[1871]: time="2025-05-27T17:05:06.160244786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-vbsh5,Uid:3da35495-9316-4296-a7ef-679ef4f4be92,Namespace:calico-apiserver,Attempt:0,}" May 27 17:05:06.161592 containerd[1871]: time="2025-05-27T17:05:06.161561067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwp7r,Uid:30f36697-5f7e-4ded-934b-1780a0c861a4,Namespace:kube-system,Attempt:0,}" May 27 17:05:06.166610 systemd[1]: Removed slice kubepods-besteffort-pod6168d23d_915e_4571_a018_ac172eb9454e.slice - libcontainer container kubepods-besteffort-pod6168d23d_915e_4571_a018_ac172eb9454e.slice. May 27 17:05:06.302133 systemd-networkd[1692]: califa624d6e8c5: Link UP May 27 17:05:06.302639 systemd-networkd[1692]: califa624d6e8c5: Gained carrier May 27 17:05:06.324313 containerd[1871]: 2025-05-27 17:05:06.204 [INFO][4620] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:05:06.324313 containerd[1871]: 2025-05-27 17:05:06.226 [INFO][4620] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0 calico-apiserver-68cd4d8cbd- calico-apiserver 3da35495-9316-4296-a7ef-679ef4f4be92 844 0 2025-05-27 17:04:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68cd4d8cbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.0.0-a-910621710e calico-apiserver-68cd4d8cbd-vbsh5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califa624d6e8c5 [] [] }} ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-vbsh5" 
WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-" May 27 17:05:06.324313 containerd[1871]: 2025-05-27 17:05:06.226 [INFO][4620] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-vbsh5" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" May 27 17:05:06.324313 containerd[1871]: 2025-05-27 17:05:06.252 [INFO][4650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" HandleID="k8s-pod-network.528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Workload="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" May 27 17:05:06.324556 containerd[1871]: 2025-05-27 17:05:06.252 [INFO][4650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" HandleID="k8s-pod-network.528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Workload="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.0.0-a-910621710e", "pod":"calico-apiserver-68cd4d8cbd-vbsh5", "timestamp":"2025-05-27 17:05:06.252706325 +0000 UTC"}, Hostname:"ci-4344.0.0-a-910621710e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:06.324556 containerd[1871]: 2025-05-27 17:05:06.252 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 17:05:06.324556 containerd[1871]: 2025-05-27 17:05:06.253 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:05:06.324556 containerd[1871]: 2025-05-27 17:05:06.253 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-910621710e' May 27 17:05:06.324556 containerd[1871]: 2025-05-27 17:05:06.260 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" host="ci-4344.0.0-a-910621710e" May 27 17:05:06.324556 containerd[1871]: 2025-05-27 17:05:06.265 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-910621710e" May 27 17:05:06.324556 containerd[1871]: 2025-05-27 17:05:06.275 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:06.324556 containerd[1871]: 2025-05-27 17:05:06.277 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:06.324556 containerd[1871]: 2025-05-27 17:05:06.279 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:06.324686 containerd[1871]: 2025-05-27 17:05:06.279 [INFO][4650] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" host="ci-4344.0.0-a-910621710e" May 27 17:05:06.324686 containerd[1871]: 2025-05-27 17:05:06.280 [INFO][4650] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62 May 27 17:05:06.324686 containerd[1871]: 2025-05-27 17:05:06.285 [INFO][4650] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.118.128/26 
handle="k8s-pod-network.528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" host="ci-4344.0.0-a-910621710e" May 27 17:05:06.324686 containerd[1871]: 2025-05-27 17:05:06.291 [INFO][4650] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.118.129/26] block=192.168.118.128/26 handle="k8s-pod-network.528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" host="ci-4344.0.0-a-910621710e" May 27 17:05:06.324686 containerd[1871]: 2025-05-27 17:05:06.291 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.129/26] handle="k8s-pod-network.528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" host="ci-4344.0.0-a-910621710e" May 27 17:05:06.324686 containerd[1871]: 2025-05-27 17:05:06.291 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:06.324686 containerd[1871]: 2025-05-27 17:05:06.291 [INFO][4650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.129/26] IPv6=[] ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" HandleID="k8s-pod-network.528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Workload="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" May 27 17:05:06.324779 containerd[1871]: 2025-05-27 17:05:06.294 [INFO][4620] cni-plugin/k8s.go 418: Populated endpoint ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-vbsh5" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0", GenerateName:"calico-apiserver-68cd4d8cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da35495-9316-4296-a7ef-679ef4f4be92", ResourceVersion:"844", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd4d8cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"", Pod:"calico-apiserver-68cd4d8cbd-vbsh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa624d6e8c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:06.324813 containerd[1871]: 2025-05-27 17:05:06.294 [INFO][4620] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.129/32] ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-vbsh5" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" May 27 17:05:06.324813 containerd[1871]: 2025-05-27 17:05:06.294 [INFO][4620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa624d6e8c5 ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-vbsh5" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" May 27 17:05:06.324813 containerd[1871]: 2025-05-27 17:05:06.303 
[INFO][4620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-vbsh5" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" May 27 17:05:06.324854 containerd[1871]: 2025-05-27 17:05:06.303 [INFO][4620] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-vbsh5" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0", GenerateName:"calico-apiserver-68cd4d8cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da35495-9316-4296-a7ef-679ef4f4be92", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd4d8cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62", Pod:"calico-apiserver-68cd4d8cbd-vbsh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.118.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa624d6e8c5", MAC:"96:84:f8:e3:c8:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:06.324890 containerd[1871]: 2025-05-27 17:05:06.321 [INFO][4620] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-vbsh5" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--vbsh5-eth0" May 27 17:05:06.400227 containerd[1871]: time="2025-05-27T17:05:06.400106167Z" level=info msg="connecting to shim 528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62" address="unix:///run/containerd/s/b67ab516570115d39f4bfa6d929237cd7b6de7853aba6bbf302bff703fe30d3b" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:06.441218 systemd[1]: Started cri-containerd-528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62.scope - libcontainer container 528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62. 
May 27 17:05:06.513279 containerd[1871]: time="2025-05-27T17:05:06.513175380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f\" id:\"576c317d4befb64c431b36e97897f2fec8ac6986eb5f85f1cd5d8baf565c4cc5\" pid:4677 exit_status:1 exited_at:{seconds:1748365506 nanos:512699484}" May 27 17:05:07.284241 containerd[1871]: time="2025-05-27T17:05:07.284186111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-2hnhp,Uid:1ed3902b-f80a-4d5d-a3a8-c425965d5219,Namespace:calico-system,Attempt:0,}" May 27 17:05:07.298144 systemd-networkd[1692]: cali13292a953b3: Link UP May 27 17:05:07.304260 systemd-networkd[1692]: cali13292a953b3: Gained carrier May 27 17:05:07.315661 systemd[1]: Created slice kubepods-besteffort-pod2501c128_3fe4_46d7_bc55_201cf393d5e1.slice - libcontainer container kubepods-besteffort-pod2501c128_3fe4_46d7_bc55_201cf393d5e1.slice. May 27 17:05:07.355542 kubelet[3382]: I0527 17:05:07.355460 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2501c128-3fe4-46d7-bc55-201cf393d5e1-whisker-ca-bundle\") pod \"whisker-7f77999cb5-qpzlk\" (UID: \"2501c128-3fe4-46d7-bc55-201cf393d5e1\") " pod="calico-system/whisker-7f77999cb5-qpzlk" May 27 17:05:07.355542 kubelet[3382]: I0527 17:05:07.355511 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk55q\" (UniqueName: \"kubernetes.io/projected/2501c128-3fe4-46d7-bc55-201cf393d5e1-kube-api-access-kk55q\") pod \"whisker-7f77999cb5-qpzlk\" (UID: \"2501c128-3fe4-46d7-bc55-201cf393d5e1\") " pod="calico-system/whisker-7f77999cb5-qpzlk" May 27 17:05:07.355542 kubelet[3382]: I0527 17:05:07.355539 3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/2501c128-3fe4-46d7-bc55-201cf393d5e1-whisker-backend-key-pair\") pod \"whisker-7f77999cb5-qpzlk\" (UID: \"2501c128-3fe4-46d7-bc55-201cf393d5e1\") " pod="calico-system/whisker-7f77999cb5-qpzlk" May 27 17:05:07.356325 containerd[1871]: 2025-05-27 17:05:06.207 [INFO][4630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:05:07.356325 containerd[1871]: 2025-05-27 17:05:06.227 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0 coredns-674b8bbfcf- kube-system 30f36697-5f7e-4ded-934b-1780a0c861a4 847 0 2025-05-27 17:04:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.0.0-a-910621710e coredns-674b8bbfcf-xwp7r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali13292a953b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwp7r" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-" May 27 17:05:07.356325 containerd[1871]: 2025-05-27 17:05:06.227 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwp7r" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" May 27 17:05:07.356325 containerd[1871]: 2025-05-27 17:05:06.255 [INFO][4645] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" HandleID="k8s-pod-network.47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" 
Workload="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" May 27 17:05:07.356503 containerd[1871]: 2025-05-27 17:05:06.255 [INFO][4645] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" HandleID="k8s-pod-network.47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Workload="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a94d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.0.0-a-910621710e", "pod":"coredns-674b8bbfcf-xwp7r", "timestamp":"2025-05-27 17:05:06.255269798 +0000 UTC"}, Hostname:"ci-4344.0.0-a-910621710e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:07.356503 containerd[1871]: 2025-05-27 17:05:06.255 [INFO][4645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:07.356503 containerd[1871]: 2025-05-27 17:05:06.291 [INFO][4645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:05:07.356503 containerd[1871]: 2025-05-27 17:05:06.291 [INFO][4645] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-910621710e' May 27 17:05:07.356503 containerd[1871]: 2025-05-27 17:05:06.361 [INFO][4645] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.356503 containerd[1871]: 2025-05-27 17:05:06.390 [INFO][4645] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-910621710e" May 27 17:05:07.356503 containerd[1871]: 2025-05-27 17:05:06.405 [INFO][4645] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:07.356503 containerd[1871]: 2025-05-27 17:05:06.411 [INFO][4645] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:07.356503 containerd[1871]: 2025-05-27 17:05:06.430 [INFO][4645] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:07.356635 containerd[1871]: 2025-05-27 17:05:06.430 [INFO][4645] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.356635 containerd[1871]: 2025-05-27 17:05:06.444 [INFO][4645] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95 May 27 17:05:07.356635 containerd[1871]: 2025-05-27 17:05:06.460 [INFO][4645] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.356635 containerd[1871]: 2025-05-27 17:05:06.471 [INFO][4645] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.118.130/26] block=192.168.118.128/26 handle="k8s-pod-network.47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.356635 containerd[1871]: 2025-05-27 17:05:07.279 [INFO][4645] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.130/26] handle="k8s-pod-network.47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.356635 containerd[1871]: 2025-05-27 17:05:07.279 [INFO][4645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:07.356635 containerd[1871]: 2025-05-27 17:05:07.279 [INFO][4645] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.130/26] IPv6=[] ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" HandleID="k8s-pod-network.47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Workload="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" May 27 17:05:07.356726 containerd[1871]: 2025-05-27 17:05:07.287 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwp7r" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"30f36697-5f7e-4ded-934b-1780a0c861a4", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"", Pod:"coredns-674b8bbfcf-xwp7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13292a953b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:07.356726 containerd[1871]: 2025-05-27 17:05:07.287 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.130/32] ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwp7r" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" May 27 17:05:07.356726 containerd[1871]: 2025-05-27 17:05:07.287 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13292a953b3 ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwp7r" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" May 27 17:05:07.356726 containerd[1871]: 2025-05-27 17:05:07.311 [INFO][4630] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwp7r" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" May 27 17:05:07.356726 containerd[1871]: 2025-05-27 17:05:07.318 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwp7r" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"30f36697-5f7e-4ded-934b-1780a0c861a4", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95", Pod:"coredns-674b8bbfcf-xwp7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13292a953b3", MAC:"02:8b:3d:3c:b5:3d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:07.356726 containerd[1871]: 2025-05-27 17:05:07.347 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" Namespace="kube-system" Pod="coredns-674b8bbfcf-xwp7r" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--xwp7r-eth0" May 27 17:05:07.488429 systemd-networkd[1692]: califa624d6e8c5: Gained IPv6LL May 27 17:05:07.528900 systemd-networkd[1692]: cali176436b514b: Link UP May 27 17:05:07.530100 systemd-networkd[1692]: cali176436b514b: Gained carrier May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.388 [INFO][4745] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.409 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0 goldmane-78d55f7ddc- calico-system 1ed3902b-f80a-4d5d-a3a8-c425965d5219 846 0 2025-05-27 17:04:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.0.0-a-910621710e goldmane-78d55f7ddc-2hnhp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali176436b514b [] [] }} 
ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-2hnhp" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.409 [INFO][4745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-2hnhp" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.440 [INFO][4813] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" HandleID="k8s-pod-network.d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Workload="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.441 [INFO][4813] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" HandleID="k8s-pod-network.d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Workload="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-910621710e", "pod":"goldmane-78d55f7ddc-2hnhp", "timestamp":"2025-05-27 17:05:07.44077201 +0000 UTC"}, Hostname:"ci-4344.0.0-a-910621710e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.441 [INFO][4813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.441 [INFO][4813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.441 [INFO][4813] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-910621710e' May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.463 [INFO][4813] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.486 [INFO][4813] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-910621710e" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.495 [INFO][4813] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.497 [INFO][4813] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.501 [INFO][4813] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.501 [INFO][4813] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.503 [INFO][4813] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.509 [INFO][4813] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.118.128/26 
handle="k8s-pod-network.d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.521 [INFO][4813] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.118.131/26] block=192.168.118.128/26 handle="k8s-pod-network.d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.521 [INFO][4813] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.131/26] handle="k8s-pod-network.d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" host="ci-4344.0.0-a-910621710e" May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.521 [INFO][4813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:07.548756 containerd[1871]: 2025-05-27 17:05:07.521 [INFO][4813] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.131/26] IPv6=[] ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" HandleID="k8s-pod-network.d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Workload="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" May 27 17:05:07.550644 containerd[1871]: 2025-05-27 17:05:07.523 [INFO][4745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-2hnhp" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1ed3902b-f80a-4d5d-a3a8-c425965d5219", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 
39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"", Pod:"goldmane-78d55f7ddc-2hnhp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.118.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali176436b514b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:07.550644 containerd[1871]: 2025-05-27 17:05:07.523 [INFO][4745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.131/32] ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-2hnhp" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" May 27 17:05:07.550644 containerd[1871]: 2025-05-27 17:05:07.523 [INFO][4745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali176436b514b ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-2hnhp" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" May 27 17:05:07.550644 containerd[1871]: 2025-05-27 17:05:07.530 [INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Namespace="calico-system" 
Pod="goldmane-78d55f7ddc-2hnhp" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" May 27 17:05:07.550644 containerd[1871]: 2025-05-27 17:05:07.531 [INFO][4745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-2hnhp" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"1ed3902b-f80a-4d5d-a3a8-c425965d5219", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea", Pod:"goldmane-78d55f7ddc-2hnhp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.118.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali176436b514b", MAC:"2e:72:d4:49:42:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:07.550644 containerd[1871]: 2025-05-27 17:05:07.543 [INFO][4745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-2hnhp" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-goldmane--78d55f7ddc--2hnhp-eth0" May 27 17:05:07.625900 containerd[1871]: time="2025-05-27T17:05:07.625825568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f77999cb5-qpzlk,Uid:2501c128-3fe4-46d7-bc55-201cf393d5e1,Namespace:calico-system,Attempt:0,}" May 27 17:05:08.118175 containerd[1871]: time="2025-05-27T17:05:08.117683186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-vbsh5,Uid:3da35495-9316-4296-a7ef-679ef4f4be92,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62\"" May 27 17:05:08.125952 containerd[1871]: time="2025-05-27T17:05:08.125907779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 17:05:08.164202 kubelet[3382]: I0527 17:05:08.163754 3382 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6168d23d-915e-4571-a018-ac172eb9454e" path="/var/lib/kubelet/pods/6168d23d-915e-4571-a018-ac172eb9454e/volumes" May 27 17:05:08.294277 systemd-networkd[1692]: vxlan.calico: Link UP May 27 17:05:08.294286 systemd-networkd[1692]: vxlan.calico: Gained carrier May 27 17:05:08.959177 systemd-networkd[1692]: cali176436b514b: Gained IPv6LL May 27 17:05:09.023340 systemd-networkd[1692]: cali13292a953b3: Gained IPv6LL May 27 17:05:09.343168 systemd-networkd[1692]: vxlan.calico: Gained IPv6LL May 27 17:05:10.295700 systemd-networkd[1692]: calief4fa4a6d24: Link UP May 27 17:05:10.296893 systemd-networkd[1692]: calief4fa4a6d24: Gained carrier May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.212 [INFO][4952] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0 whisker-7f77999cb5- calico-system 2501c128-3fe4-46d7-bc55-201cf393d5e1 944 0 2025-05-27 17:05:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f77999cb5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.0.0-a-910621710e whisker-7f77999cb5-qpzlk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calief4fa4a6d24 [] [] }} ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Namespace="calico-system" Pod="whisker-7f77999cb5-qpzlk" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.212 [INFO][4952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Namespace="calico-system" Pod="whisker-7f77999cb5-qpzlk" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.242 [INFO][4975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" HandleID="k8s-pod-network.0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Workload="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.243 [INFO][4975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" HandleID="k8s-pod-network.0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Workload="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-910621710e", "pod":"whisker-7f77999cb5-qpzlk", "timestamp":"2025-05-27 17:05:10.242797572 +0000 UTC"}, Hostname:"ci-4344.0.0-a-910621710e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.243 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.243 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.243 [INFO][4975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-910621710e' May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.253 [INFO][4975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" host="ci-4344.0.0-a-910621710e" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.259 [INFO][4975] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-910621710e" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.264 [INFO][4975] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.267 [INFO][4975] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.269 [INFO][4975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.269 [INFO][4975] ipam/ipam.go 
1220: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" host="ci-4344.0.0-a-910621710e" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.273 [INFO][4975] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.278 [INFO][4975] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" host="ci-4344.0.0-a-910621710e" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.286 [INFO][4975] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.118.132/26] block=192.168.118.128/26 handle="k8s-pod-network.0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" host="ci-4344.0.0-a-910621710e" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.286 [INFO][4975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.132/26] handle="k8s-pod-network.0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" host="ci-4344.0.0-a-910621710e" May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.286 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:05:10.318272 containerd[1871]: 2025-05-27 17:05:10.286 [INFO][4975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.132/26] IPv6=[] ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" HandleID="k8s-pod-network.0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Workload="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" May 27 17:05:10.318961 containerd[1871]: 2025-05-27 17:05:10.289 [INFO][4952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Namespace="calico-system" Pod="whisker-7f77999cb5-qpzlk" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0", GenerateName:"whisker-7f77999cb5-", Namespace:"calico-system", SelfLink:"", UID:"2501c128-3fe4-46d7-bc55-201cf393d5e1", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f77999cb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"", Pod:"whisker-7f77999cb5-qpzlk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.118.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calief4fa4a6d24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:10.318961 containerd[1871]: 2025-05-27 17:05:10.289 [INFO][4952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.132/32] ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Namespace="calico-system" Pod="whisker-7f77999cb5-qpzlk" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" May 27 17:05:10.318961 containerd[1871]: 2025-05-27 17:05:10.289 [INFO][4952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief4fa4a6d24 ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Namespace="calico-system" Pod="whisker-7f77999cb5-qpzlk" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" May 27 17:05:10.318961 containerd[1871]: 2025-05-27 17:05:10.297 [INFO][4952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Namespace="calico-system" Pod="whisker-7f77999cb5-qpzlk" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" May 27 17:05:10.318961 containerd[1871]: 2025-05-27 17:05:10.300 [INFO][4952] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Namespace="calico-system" Pod="whisker-7f77999cb5-qpzlk" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0", GenerateName:"whisker-7f77999cb5-", Namespace:"calico-system", SelfLink:"", 
UID:"2501c128-3fe4-46d7-bc55-201cf393d5e1", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f77999cb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c", Pod:"whisker-7f77999cb5-qpzlk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.118.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calief4fa4a6d24", MAC:"52:01:f4:32:0e:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:10.318961 containerd[1871]: 2025-05-27 17:05:10.313 [INFO][4952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" Namespace="calico-system" Pod="whisker-7f77999cb5-qpzlk" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-whisker--7f77999cb5--qpzlk-eth0" May 27 17:05:10.481034 containerd[1871]: time="2025-05-27T17:05:10.480954176Z" level=info msg="connecting to shim 47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95" address="unix:///run/containerd/s/afc5a482961fcbe0a020e1f7cdef251a0cea6bee55edcb7182004a1ddfadaf62" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:10.508207 containerd[1871]: time="2025-05-27T17:05:10.508138860Z" level=info msg="connecting to 
shim d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea" address="unix:///run/containerd/s/d0a29ddb5fb625c3334a0978751b8beffc095a5330d687e70daaf38e4bfc6cbd" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:10.512366 systemd[1]: Started cri-containerd-47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95.scope - libcontainer container 47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95. May 27 17:05:10.540384 containerd[1871]: time="2025-05-27T17:05:10.540318911Z" level=info msg="connecting to shim 0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c" address="unix:///run/containerd/s/bb74e4f8699c24a2598b1f499f3fa9a1b0de7a95b0a32d9c851ffaed1cb847ce" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:10.551540 systemd[1]: Started cri-containerd-d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea.scope - libcontainer container d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea. May 27 17:05:10.585403 systemd[1]: Started cri-containerd-0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c.scope - libcontainer container 0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c. 
May 27 17:05:10.604017 containerd[1871]: time="2025-05-27T17:05:10.603515983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xwp7r,Uid:30f36697-5f7e-4ded-934b-1780a0c861a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95\"" May 27 17:05:10.615529 containerd[1871]: time="2025-05-27T17:05:10.615488648Z" level=info msg="CreateContainer within sandbox \"47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:05:10.651817 containerd[1871]: time="2025-05-27T17:05:10.651764436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f77999cb5-qpzlk,Uid:2501c128-3fe4-46d7-bc55-201cf393d5e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cbcaf715a4f62ff7d7a7620868902593e64452ddefdde33eb780fce3d51327c\"" May 27 17:05:10.662374 containerd[1871]: time="2025-05-27T17:05:10.662331182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-2hnhp,Uid:1ed3902b-f80a-4d5d-a3a8-c425965d5219,Namespace:calico-system,Attempt:0,} returns sandbox id \"d92f41c58e5f93a409f62385dc022eba2fc9cdcea26b152ff8cf8ec53e7161ea\"" May 27 17:05:10.668609 containerd[1871]: time="2025-05-27T17:05:10.668521910Z" level=info msg="Container b5e3cd2324fea3e099e0255afd5bcbc7b79531699b5100e2aeec893f9e0445d1: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:10.689723 containerd[1871]: time="2025-05-27T17:05:10.689672793Z" level=info msg="CreateContainer within sandbox \"47dedf2f467548aa7e7a02697e1a132fd81d3bb057378f415269f5be60109b95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5e3cd2324fea3e099e0255afd5bcbc7b79531699b5100e2aeec893f9e0445d1\"" May 27 17:05:10.690728 containerd[1871]: time="2025-05-27T17:05:10.690658281Z" level=info msg="StartContainer for \"b5e3cd2324fea3e099e0255afd5bcbc7b79531699b5100e2aeec893f9e0445d1\"" May 27 17:05:10.692468 
containerd[1871]: time="2025-05-27T17:05:10.692252089Z" level=info msg="connecting to shim b5e3cd2324fea3e099e0255afd5bcbc7b79531699b5100e2aeec893f9e0445d1" address="unix:///run/containerd/s/afc5a482961fcbe0a020e1f7cdef251a0cea6bee55edcb7182004a1ddfadaf62" protocol=ttrpc version=3 May 27 17:05:10.712188 systemd[1]: Started cri-containerd-b5e3cd2324fea3e099e0255afd5bcbc7b79531699b5100e2aeec893f9e0445d1.scope - libcontainer container b5e3cd2324fea3e099e0255afd5bcbc7b79531699b5100e2aeec893f9e0445d1. May 27 17:05:10.746921 containerd[1871]: time="2025-05-27T17:05:10.746871920Z" level=info msg="StartContainer for \"b5e3cd2324fea3e099e0255afd5bcbc7b79531699b5100e2aeec893f9e0445d1\" returns successfully" May 27 17:05:11.200194 update_engine[1852]: I20250527 17:05:11.200050 1852 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:05:11.200771 update_engine[1852]: I20250527 17:05:11.200505 1852 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:05:11.200839 update_engine[1852]: I20250527 17:05:11.200804 1852 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 17:05:11.320578 update_engine[1852]: E20250527 17:05:11.320506 1852 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:05:11.320726 update_engine[1852]: I20250527 17:05:11.320603 1852 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 27 17:05:11.381150 kubelet[3382]: I0527 17:05:11.381046 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xwp7r" podStartSLOduration=45.381028296 podStartE2EDuration="45.381028296s" podCreationTimestamp="2025-05-27 17:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:05:11.379855385 +0000 UTC m=+49.399369622" watchObservedRunningTime="2025-05-27 17:05:11.381028296 +0000 UTC m=+49.400542533" May 27 17:05:11.391196 systemd-networkd[1692]: calief4fa4a6d24: Gained IPv6LL May 27 17:05:12.351152 containerd[1871]: time="2025-05-27T17:05:12.351096295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:12.353806 containerd[1871]: time="2025-05-27T17:05:12.353627766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 27 17:05:12.362061 containerd[1871]: time="2025-05-27T17:05:12.361296562Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:12.366866 containerd[1871]: time="2025-05-27T17:05:12.366803887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:12.367406 containerd[1871]: time="2025-05-27T17:05:12.367128503Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 4.241178116s" May 27 17:05:12.367406 containerd[1871]: time="2025-05-27T17:05:12.367155095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 27 17:05:12.369817 containerd[1871]: time="2025-05-27T17:05:12.369780358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:05:12.376013 containerd[1871]: time="2025-05-27T17:05:12.375915203Z" level=info msg="CreateContainer within sandbox \"528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:05:12.418008 containerd[1871]: time="2025-05-27T17:05:12.417763806Z" level=info msg="Container 061aa4e551a1a685cbf8b34cbf008b1080f712dd136408b339d0c5340a659117: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:12.447885 containerd[1871]: time="2025-05-27T17:05:12.447836984Z" level=info msg="CreateContainer within sandbox \"528aed811b048d0eb5dd2e04c45a2761a1470630c6e096fb45e16cb078c29a62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"061aa4e551a1a685cbf8b34cbf008b1080f712dd136408b339d0c5340a659117\"" May 27 17:05:12.448860 containerd[1871]: time="2025-05-27T17:05:12.448678872Z" level=info msg="StartContainer for \"061aa4e551a1a685cbf8b34cbf008b1080f712dd136408b339d0c5340a659117\"" May 27 17:05:12.450092 containerd[1871]: time="2025-05-27T17:05:12.450044351Z" level=info msg="connecting to shim 061aa4e551a1a685cbf8b34cbf008b1080f712dd136408b339d0c5340a659117" 
address="unix:///run/containerd/s/b67ab516570115d39f4bfa6d929237cd7b6de7853aba6bbf302bff703fe30d3b" protocol=ttrpc version=3 May 27 17:05:12.470190 systemd[1]: Started cri-containerd-061aa4e551a1a685cbf8b34cbf008b1080f712dd136408b339d0c5340a659117.scope - libcontainer container 061aa4e551a1a685cbf8b34cbf008b1080f712dd136408b339d0c5340a659117. May 27 17:05:12.510122 containerd[1871]: time="2025-05-27T17:05:12.509884554Z" level=info msg="StartContainer for \"061aa4e551a1a685cbf8b34cbf008b1080f712dd136408b339d0c5340a659117\" returns successfully" May 27 17:05:12.527920 containerd[1871]: time="2025-05-27T17:05:12.527216009Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:12.538968 containerd[1871]: time="2025-05-27T17:05:12.538804947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:12.539170 containerd[1871]: time="2025-05-27T17:05:12.539022083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:05:12.541093 kubelet[3382]: E0527 17:05:12.541047 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request 
to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:12.542000 kubelet[3382]: E0527 17:05:12.541796 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:12.544950 containerd[1871]: time="2025-05-27T17:05:12.544856809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:05:12.548364 kubelet[3382]: E0527 17:05:12.548052 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12ab7f8fc8ea4181b794bd8076014a03,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kk55q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnl
yRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f77999cb5-qpzlk_calico-system(2501c128-3fe4-46d7-bc55-201cf393d5e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:12.748873 containerd[1871]: time="2025-05-27T17:05:12.748817349Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:12.751699 containerd[1871]: time="2025-05-27T17:05:12.751643324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:12.751910 containerd[1871]: time="2025-05-27T17:05:12.751665812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:05:12.752172 kubelet[3382]: E0527 
17:05:12.752123 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:12.752458 kubelet[3382]: E0527 17:05:12.752187 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:12.753079 containerd[1871]: time="2025-05-27T17:05:12.752778835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:05:12.753166 kubelet[3382]: E0527 17:05:12.752859 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lprff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-2hnhp_calico-system(1ed3902b-f80a-4d5d-a3a8-c425965d5219): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:12.754121 kubelet[3382]: E0527 17:05:12.754085 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:05:12.975029 containerd[1871]: 
time="2025-05-27T17:05:12.974884871Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:12.979954 containerd[1871]: time="2025-05-27T17:05:12.978083797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:12.980146 containerd[1871]: time="2025-05-27T17:05:12.980026116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:05:12.980410 kubelet[3382]: E0527 17:05:12.980364 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:12.980489 kubelet[3382]: E0527 17:05:12.980422 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:12.980682 kubelet[3382]: E0527 17:05:12.980562 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk55q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessage
Policy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f77999cb5-qpzlk_calico-system(2501c128-3fe4-46d7-bc55-201cf393d5e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:12.981948 kubelet[3382]: E0527 17:05:12.981885 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1" May 27 17:05:13.373522 kubelet[3382]: E0527 17:05:13.373223 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:05:13.374063 kubelet[3382]: E0527 17:05:13.373978 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1" May 27 17:05:14.009233 kubelet[3382]: I0527 17:05:14.009099 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-vbsh5" podStartSLOduration=33.76241541 podStartE2EDuration="38.009081671s" 
podCreationTimestamp="2025-05-27 17:04:36 +0000 UTC" firstStartedPulling="2025-05-27 17:05:08.121335322 +0000 UTC m=+46.140849559" lastFinishedPulling="2025-05-27 17:05:12.368001583 +0000 UTC m=+50.387515820" observedRunningTime="2025-05-27 17:05:13.442962611 +0000 UTC m=+51.462476864" watchObservedRunningTime="2025-05-27 17:05:14.009081671 +0000 UTC m=+52.028595908" May 27 17:05:16.161354 containerd[1871]: time="2025-05-27T17:05:16.160851142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-298nw,Uid:840ee727-191a-49f8-8e10-04e6abda410d,Namespace:calico-apiserver,Attempt:0,}" May 27 17:05:16.276832 systemd-networkd[1692]: calid33eacf1fb0: Link UP May 27 17:05:16.277904 systemd-networkd[1692]: calid33eacf1fb0: Gained carrier May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.205 [INFO][5232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0 calico-apiserver-68cd4d8cbd- calico-apiserver 840ee727-191a-49f8-8e10-04e6abda410d 845 0 2025-05-27 17:04:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68cd4d8cbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.0.0-a-910621710e calico-apiserver-68cd4d8cbd-298nw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid33eacf1fb0 [] [] }} ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-298nw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.205 [INFO][5232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-298nw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.230 [INFO][5244] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" HandleID="k8s-pod-network.16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Workload="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.230 [INFO][5244] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" HandleID="k8s-pod-network.16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Workload="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.0.0-a-910621710e", "pod":"calico-apiserver-68cd4d8cbd-298nw", "timestamp":"2025-05-27 17:05:16.230270148 +0000 UTC"}, Hostname:"ci-4344.0.0-a-910621710e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.230 [INFO][5244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.230 [INFO][5244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.230 [INFO][5244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-910621710e' May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.237 [INFO][5244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" host="ci-4344.0.0-a-910621710e" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.243 [INFO][5244] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-910621710e" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.250 [INFO][5244] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.252 [INFO][5244] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.255 [INFO][5244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.255 [INFO][5244] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" host="ci-4344.0.0-a-910621710e" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.256 [INFO][5244] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.263 [INFO][5244] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" host="ci-4344.0.0-a-910621710e" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.271 [INFO][5244] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.118.133/26] block=192.168.118.128/26 handle="k8s-pod-network.16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" host="ci-4344.0.0-a-910621710e" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.271 [INFO][5244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.133/26] handle="k8s-pod-network.16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" host="ci-4344.0.0-a-910621710e" May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.271 [INFO][5244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:16.297332 containerd[1871]: 2025-05-27 17:05:16.271 [INFO][5244] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.133/26] IPv6=[] ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" HandleID="k8s-pod-network.16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Workload="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" May 27 17:05:16.301676 containerd[1871]: 2025-05-27 17:05:16.273 [INFO][5232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-298nw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0", GenerateName:"calico-apiserver-68cd4d8cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"840ee727-191a-49f8-8e10-04e6abda410d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd4d8cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"", Pod:"calico-apiserver-68cd4d8cbd-298nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid33eacf1fb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:16.301676 containerd[1871]: 2025-05-27 17:05:16.273 [INFO][5232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.133/32] ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-298nw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" May 27 17:05:16.301676 containerd[1871]: 2025-05-27 17:05:16.274 [INFO][5232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid33eacf1fb0 ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-298nw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" May 27 17:05:16.301676 containerd[1871]: 2025-05-27 17:05:16.276 [INFO][5232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Namespace="calico-apiserver" 
Pod="calico-apiserver-68cd4d8cbd-298nw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" May 27 17:05:16.301676 containerd[1871]: 2025-05-27 17:05:16.277 [INFO][5232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-298nw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0", GenerateName:"calico-apiserver-68cd4d8cbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"840ee727-191a-49f8-8e10-04e6abda410d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd4d8cbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f", Pod:"calico-apiserver-68cd4d8cbd-298nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid33eacf1fb0", 
MAC:"ce:ec:bd:65:e0:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:16.301676 containerd[1871]: 2025-05-27 17:05:16.291 [INFO][5232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" Namespace="calico-apiserver" Pod="calico-apiserver-68cd4d8cbd-298nw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--apiserver--68cd4d8cbd--298nw-eth0" May 27 17:05:16.380555 containerd[1871]: time="2025-05-27T17:05:16.380484899Z" level=info msg="connecting to shim 16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f" address="unix:///run/containerd/s/e1412aa8afeafb8f7fbc28fd5ab1b0fdf52742eec69bad94060b962e0ee8d4f1" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:16.407193 systemd[1]: Started cri-containerd-16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f.scope - libcontainer container 16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f. 
May 27 17:05:16.443370 containerd[1871]: time="2025-05-27T17:05:16.443326956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd4d8cbd-298nw,Uid:840ee727-191a-49f8-8e10-04e6abda410d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f\"" May 27 17:05:16.452455 containerd[1871]: time="2025-05-27T17:05:16.452410639Z" level=info msg="CreateContainer within sandbox \"16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 17:05:16.495474 containerd[1871]: time="2025-05-27T17:05:16.495196459Z" level=info msg="Container 7a6f7755d9f65beb558483e60a8cfae1dbfc72f6033382c530e14de6221ed3a6: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:16.495856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863160947.mount: Deactivated successfully. May 27 17:05:16.517935 containerd[1871]: time="2025-05-27T17:05:16.517887992Z" level=info msg="CreateContainer within sandbox \"16904f2d209b852d92e677621359521187ad12b23e41aab398d9604df338dc9f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a6f7755d9f65beb558483e60a8cfae1dbfc72f6033382c530e14de6221ed3a6\"" May 27 17:05:16.518940 containerd[1871]: time="2025-05-27T17:05:16.518723095Z" level=info msg="StartContainer for \"7a6f7755d9f65beb558483e60a8cfae1dbfc72f6033382c530e14de6221ed3a6\"" May 27 17:05:16.520966 containerd[1871]: time="2025-05-27T17:05:16.520910702Z" level=info msg="connecting to shim 7a6f7755d9f65beb558483e60a8cfae1dbfc72f6033382c530e14de6221ed3a6" address="unix:///run/containerd/s/e1412aa8afeafb8f7fbc28fd5ab1b0fdf52742eec69bad94060b962e0ee8d4f1" protocol=ttrpc version=3 May 27 17:05:16.538213 systemd[1]: Started cri-containerd-7a6f7755d9f65beb558483e60a8cfae1dbfc72f6033382c530e14de6221ed3a6.scope - libcontainer container 7a6f7755d9f65beb558483e60a8cfae1dbfc72f6033382c530e14de6221ed3a6. 
May 27 17:05:16.578866 containerd[1871]: time="2025-05-27T17:05:16.578824322Z" level=info msg="StartContainer for \"7a6f7755d9f65beb558483e60a8cfae1dbfc72f6033382c530e14de6221ed3a6\" returns successfully" May 27 17:05:17.526396 kubelet[3382]: I0527 17:05:17.525732 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68cd4d8cbd-298nw" podStartSLOduration=41.525714869 podStartE2EDuration="41.525714869s" podCreationTimestamp="2025-05-27 17:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:05:17.401363795 +0000 UTC m=+55.420878048" watchObservedRunningTime="2025-05-27 17:05:17.525714869 +0000 UTC m=+55.545229106" May 27 17:05:17.535230 systemd-networkd[1692]: calid33eacf1fb0: Gained IPv6LL May 27 17:05:20.160927 containerd[1871]: time="2025-05-27T17:05:20.160554517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tmb64,Uid:9cad5d59-f409-4303-a7b2-86efde3e9bab,Namespace:calico-system,Attempt:0,}" May 27 17:05:20.161892 containerd[1871]: time="2025-05-27T17:05:20.161521485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58544bdc56-m6766,Uid:0f0ca896-7cec-4519-bde0-47ca41dbc403,Namespace:calico-system,Attempt:0,}" May 27 17:05:20.161892 containerd[1871]: time="2025-05-27T17:05:20.161691877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r6nxw,Uid:9e4be9e5-106a-4380-9190-17d1402c83e7,Namespace:kube-system,Attempt:0,}" May 27 17:05:20.381602 systemd-networkd[1692]: calic51106b393f: Link UP May 27 17:05:20.383292 systemd-networkd[1692]: calic51106b393f: Gained carrier May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.237 [INFO][5349] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0 
csi-node-driver- calico-system 9cad5d59-f409-4303-a7b2-86efde3e9bab 705 0 2025-05-27 17:04:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.0.0-a-910621710e csi-node-driver-tmb64 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic51106b393f [] [] }} ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Namespace="calico-system" Pod="csi-node-driver-tmb64" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.237 [INFO][5349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Namespace="calico-system" Pod="csi-node-driver-tmb64" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.314 [INFO][5385] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" HandleID="k8s-pod-network.e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Workload="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.316 [INFO][5385] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" HandleID="k8s-pod-network.e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Workload="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d840), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-910621710e", "pod":"csi-node-driver-tmb64", "timestamp":"2025-05-27 17:05:20.314295654 +0000 UTC"}, Hostname:"ci-4344.0.0-a-910621710e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.316 [INFO][5385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.316 [INFO][5385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.316 [INFO][5385] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-910621710e' May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.329 [INFO][5385] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.341 [INFO][5385] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-910621710e" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.349 [INFO][5385] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.352 [INFO][5385] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.355 [INFO][5385] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.355 [INFO][5385] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.118.128/26 
handle="k8s-pod-network.e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.357 [INFO][5385] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751 May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.364 [INFO][5385] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.374 [INFO][5385] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.118.134/26] block=192.168.118.128/26 handle="k8s-pod-network.e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.374 [INFO][5385] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.134/26] handle="k8s-pod-network.e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.375 [INFO][5385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:05:20.406951 containerd[1871]: 2025-05-27 17:05:20.375 [INFO][5385] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.134/26] IPv6=[] ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" HandleID="k8s-pod-network.e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Workload="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" May 27 17:05:20.472660 containerd[1871]: 2025-05-27 17:05:20.378 [INFO][5349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Namespace="calico-system" Pod="csi-node-driver-tmb64" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9cad5d59-f409-4303-a7b2-86efde3e9bab", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"", Pod:"csi-node-driver-tmb64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic51106b393f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:20.472660 containerd[1871]: 2025-05-27 17:05:20.378 [INFO][5349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.134/32] ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Namespace="calico-system" Pod="csi-node-driver-tmb64" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" May 27 17:05:20.472660 containerd[1871]: 2025-05-27 17:05:20.378 [INFO][5349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic51106b393f ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Namespace="calico-system" Pod="csi-node-driver-tmb64" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" May 27 17:05:20.472660 containerd[1871]: 2025-05-27 17:05:20.380 [INFO][5349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Namespace="calico-system" Pod="csi-node-driver-tmb64" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" May 27 17:05:20.472660 containerd[1871]: 2025-05-27 17:05:20.385 [INFO][5349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Namespace="calico-system" Pod="csi-node-driver-tmb64" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"9cad5d59-f409-4303-a7b2-86efde3e9bab", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751", Pod:"csi-node-driver-tmb64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic51106b393f", MAC:"fe:40:08:df:f4:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:20.472660 containerd[1871]: 2025-05-27 17:05:20.401 [INFO][5349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" Namespace="calico-system" Pod="csi-node-driver-tmb64" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-csi--node--driver--tmb64-eth0" May 27 17:05:20.482270 systemd-networkd[1692]: calife07e824c4c: Link UP May 27 17:05:20.484472 systemd-networkd[1692]: calife07e824c4c: Gained carrier May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.278 [INFO][5359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0 calico-kube-controllers-58544bdc56- calico-system 0f0ca896-7cec-4519-bde0-47ca41dbc403 842 0 2025-05-27 17:04:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58544bdc56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.0.0-a-910621710e calico-kube-controllers-58544bdc56-m6766 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calife07e824c4c [] [] }} ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Namespace="calico-system" Pod="calico-kube-controllers-58544bdc56-m6766" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.280 [INFO][5359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Namespace="calico-system" Pod="calico-kube-controllers-58544bdc56-m6766" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.345 [INFO][5397] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" HandleID="k8s-pod-network.dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Workload="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.345 [INFO][5397] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" 
HandleID="k8s-pod-network.dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Workload="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cd640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.0.0-a-910621710e", "pod":"calico-kube-controllers-58544bdc56-m6766", "timestamp":"2025-05-27 17:05:20.345037253 +0000 UTC"}, Hostname:"ci-4344.0.0-a-910621710e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.345 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.375 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.375 [INFO][5397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-910621710e' May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.427 [INFO][5397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.439 [INFO][5397] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-910621710e" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.449 [INFO][5397] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.451 [INFO][5397] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.454 [INFO][5397] ipam/ipam.go 
235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.454 [INFO][5397] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.455 [INFO][5397] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.461 [INFO][5397] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.473 [INFO][5397] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.118.135/26] block=192.168.118.128/26 handle="k8s-pod-network.dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.474 [INFO][5397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.135/26] handle="k8s-pod-network.dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.474 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 17:05:20.506556 containerd[1871]: 2025-05-27 17:05:20.474 [INFO][5397] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.135/26] IPv6=[] ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" HandleID="k8s-pod-network.dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Workload="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" May 27 17:05:20.507795 containerd[1871]: 2025-05-27 17:05:20.476 [INFO][5359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Namespace="calico-system" Pod="calico-kube-controllers-58544bdc56-m6766" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0", GenerateName:"calico-kube-controllers-58544bdc56-", Namespace:"calico-system", SelfLink:"", UID:"0f0ca896-7cec-4519-bde0-47ca41dbc403", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58544bdc56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"", Pod:"calico-kube-controllers-58544bdc56-m6766", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife07e824c4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:20.507795 containerd[1871]: 2025-05-27 17:05:20.476 [INFO][5359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.135/32] ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Namespace="calico-system" Pod="calico-kube-controllers-58544bdc56-m6766" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" May 27 17:05:20.507795 containerd[1871]: 2025-05-27 17:05:20.476 [INFO][5359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife07e824c4c ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Namespace="calico-system" Pod="calico-kube-controllers-58544bdc56-m6766" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" May 27 17:05:20.507795 containerd[1871]: 2025-05-27 17:05:20.483 [INFO][5359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Namespace="calico-system" Pod="calico-kube-controllers-58544bdc56-m6766" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" May 27 17:05:20.507795 containerd[1871]: 2025-05-27 17:05:20.485 [INFO][5359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Namespace="calico-system" Pod="calico-kube-controllers-58544bdc56-m6766" 
WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0", GenerateName:"calico-kube-controllers-58544bdc56-", Namespace:"calico-system", SelfLink:"", UID:"0f0ca896-7cec-4519-bde0-47ca41dbc403", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58544bdc56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb", Pod:"calico-kube-controllers-58544bdc56-m6766", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife07e824c4c", MAC:"32:2d:28:2b:ae:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:20.507795 containerd[1871]: 2025-05-27 17:05:20.504 [INFO][5359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" Namespace="calico-system" 
Pod="calico-kube-controllers-58544bdc56-m6766" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-calico--kube--controllers--58544bdc56--m6766-eth0" May 27 17:05:20.586472 systemd-networkd[1692]: calif74cfae19d5: Link UP May 27 17:05:20.586731 systemd-networkd[1692]: calif74cfae19d5: Gained carrier May 27 17:05:20.601640 containerd[1871]: time="2025-05-27T17:05:20.601585913Z" level=info msg="connecting to shim e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751" address="unix:///run/containerd/s/81382238319e26c2031d26d508fb64dea5acc59b3b8a345cf532fc80a53ab792" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.320 [INFO][5376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0 coredns-674b8bbfcf- kube-system 9e4be9e5-106a-4380-9190-17d1402c83e7 841 0 2025-05-27 17:04:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.0.0-a-910621710e coredns-674b8bbfcf-r6nxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif74cfae19d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-r6nxw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.320 [INFO][5376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-r6nxw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.354 [INFO][5405] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" HandleID="k8s-pod-network.5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Workload="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.354 [INFO][5405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" HandleID="k8s-pod-network.5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Workload="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7630), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.0.0-a-910621710e", "pod":"coredns-674b8bbfcf-r6nxw", "timestamp":"2025-05-27 17:05:20.354176188 +0000 UTC"}, Hostname:"ci-4344.0.0-a-910621710e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.354 [INFO][5405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.474 [INFO][5405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.474 [INFO][5405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.0.0-a-910621710e' May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.527 [INFO][5405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.539 [INFO][5405] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.0.0-a-910621710e" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.551 [INFO][5405] ipam/ipam.go 511: Trying affinity for 192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.554 [INFO][5405] ipam/ipam.go 158: Attempting to load block cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.557 [INFO][5405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4344.0.0-a-910621710e" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.557 [INFO][5405] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.559 [INFO][5405] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.563 [INFO][5405] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.574 [INFO][5405] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.118.136/26] block=192.168.118.128/26 handle="k8s-pod-network.5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.575 [INFO][5405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.118.136/26] handle="k8s-pod-network.5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" host="ci-4344.0.0-a-910621710e" May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.575 [INFO][5405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 17:05:20.618502 containerd[1871]: 2025-05-27 17:05:20.575 [INFO][5405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.136/26] IPv6=[] ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" HandleID="k8s-pod-network.5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Workload="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" May 27 17:05:20.620097 containerd[1871]: 2025-05-27 17:05:20.577 [INFO][5376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-r6nxw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9e4be9e5-106a-4380-9190-17d1402c83e7", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"", Pod:"coredns-674b8bbfcf-r6nxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif74cfae19d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:20.620097 containerd[1871]: 2025-05-27 17:05:20.577 [INFO][5376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.136/32] ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-r6nxw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" May 27 17:05:20.620097 containerd[1871]: 2025-05-27 17:05:20.577 [INFO][5376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif74cfae19d5 ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-r6nxw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" May 27 17:05:20.620097 containerd[1871]: 2025-05-27 17:05:20.589 [INFO][5376] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-r6nxw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" May 27 17:05:20.620097 containerd[1871]: 2025-05-27 17:05:20.592 [INFO][5376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-r6nxw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9e4be9e5-106a-4380-9190-17d1402c83e7", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 17, 4, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.0.0-a-910621710e", ContainerID:"5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c", Pod:"coredns-674b8bbfcf-r6nxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif74cfae19d5", MAC:"d6:63:a7:2f:2b:a8", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 17:05:20.620097 containerd[1871]: 2025-05-27 17:05:20.611 [INFO][5376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" Namespace="kube-system" Pod="coredns-674b8bbfcf-r6nxw" WorkloadEndpoint="ci--4344.0.0--a--910621710e-k8s-coredns--674b8bbfcf--r6nxw-eth0" May 27 17:05:20.641600 containerd[1871]: time="2025-05-27T17:05:20.641252784Z" level=info msg="connecting to shim dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb" address="unix:///run/containerd/s/c1e09d217817eea122c164461ecaf9ca9b39fc82b3bf5942f6f4bad341fa3b51" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:20.644221 systemd[1]: Started cri-containerd-e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751.scope - libcontainer container e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751. May 27 17:05:20.672504 systemd[1]: Started cri-containerd-dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb.scope - libcontainer container dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb. 
May 27 17:05:20.688028 containerd[1871]: time="2025-05-27T17:05:20.687532358Z" level=info msg="connecting to shim 5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c" address="unix:///run/containerd/s/25f3a137cc9a57853f563344e3c65f388ef523ba4631e36ae25bf7b64d77d0a8" namespace=k8s.io protocol=ttrpc version=3 May 27 17:05:20.701087 containerd[1871]: time="2025-05-27T17:05:20.700619365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tmb64,Uid:9cad5d59-f409-4303-a7b2-86efde3e9bab,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751\"" May 27 17:05:20.708390 containerd[1871]: time="2025-05-27T17:05:20.708211013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 17:05:20.729375 systemd[1]: Started cri-containerd-5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c.scope - libcontainer container 5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c. 
May 27 17:05:20.736832 containerd[1871]: time="2025-05-27T17:05:20.736787699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58544bdc56-m6766,Uid:0f0ca896-7cec-4519-bde0-47ca41dbc403,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb\"" May 27 17:05:20.773372 containerd[1871]: time="2025-05-27T17:05:20.773249426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r6nxw,Uid:9e4be9e5-106a-4380-9190-17d1402c83e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c\"" May 27 17:05:20.786130 containerd[1871]: time="2025-05-27T17:05:20.786084673Z" level=info msg="CreateContainer within sandbox \"5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:05:20.829335 containerd[1871]: time="2025-05-27T17:05:20.829269903Z" level=info msg="Container 5968b40e8ddbaff454a9048ec3ac126154459f4e9c575b0e34d695e9fe053eb7: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:20.845560 containerd[1871]: time="2025-05-27T17:05:20.845464095Z" level=info msg="CreateContainer within sandbox \"5a3d0a53d298bf658009e899ba6697296c7a1160cc26effc583eb94f7e139a1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5968b40e8ddbaff454a9048ec3ac126154459f4e9c575b0e34d695e9fe053eb7\"" May 27 17:05:20.847578 containerd[1871]: time="2025-05-27T17:05:20.846778535Z" level=info msg="StartContainer for \"5968b40e8ddbaff454a9048ec3ac126154459f4e9c575b0e34d695e9fe053eb7\"" May 27 17:05:20.849127 containerd[1871]: time="2025-05-27T17:05:20.848974175Z" level=info msg="connecting to shim 5968b40e8ddbaff454a9048ec3ac126154459f4e9c575b0e34d695e9fe053eb7" address="unix:///run/containerd/s/25f3a137cc9a57853f563344e3c65f388ef523ba4631e36ae25bf7b64d77d0a8" protocol=ttrpc version=3 May 27 17:05:20.871204 systemd[1]: 
Started cri-containerd-5968b40e8ddbaff454a9048ec3ac126154459f4e9c575b0e34d695e9fe053eb7.scope - libcontainer container 5968b40e8ddbaff454a9048ec3ac126154459f4e9c575b0e34d695e9fe053eb7. May 27 17:05:20.908572 containerd[1871]: time="2025-05-27T17:05:20.908458228Z" level=info msg="StartContainer for \"5968b40e8ddbaff454a9048ec3ac126154459f4e9c575b0e34d695e9fe053eb7\" returns successfully" May 27 17:05:21.200738 update_engine[1852]: I20250527 17:05:21.200107 1852 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:05:21.200738 update_engine[1852]: I20250527 17:05:21.200347 1852 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:05:21.200738 update_engine[1852]: I20250527 17:05:21.200595 1852 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 17:05:21.250258 update_engine[1852]: E20250527 17:05:21.250191 1852 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:05:21.250770 update_engine[1852]: I20250527 17:05:21.250465 1852 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 27 17:05:21.250770 update_engine[1852]: I20250527 17:05:21.250494 1852 omaha_request_action.cc:617] Omaha request response: May 27 17:05:21.250770 update_engine[1852]: E20250527 17:05:21.250602 1852 omaha_request_action.cc:636] Omaha request network transfer failed. May 27 17:05:21.250770 update_engine[1852]: I20250527 17:05:21.250619 1852 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 27 17:05:21.250770 update_engine[1852]: I20250527 17:05:21.250623 1852 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 17:05:21.250770 update_engine[1852]: I20250527 17:05:21.250627 1852 update_attempter.cc:306] Processing Done. May 27 17:05:21.250770 update_engine[1852]: E20250527 17:05:21.250641 1852 update_attempter.cc:619] Update failed. 
May 27 17:05:21.250770 update_engine[1852]: I20250527 17:05:21.250644 1852 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 27 17:05:21.250770 update_engine[1852]: I20250527 17:05:21.250648 1852 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 27 17:05:21.250770 update_engine[1852]: I20250527 17:05:21.250651 1852 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 27 17:05:21.251704 update_engine[1852]: I20250527 17:05:21.251185 1852 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 27 17:05:21.251704 update_engine[1852]: I20250527 17:05:21.251233 1852 omaha_request_action.cc:271] Posting an Omaha request to disabled May 27 17:05:21.251704 update_engine[1852]: I20250527 17:05:21.251237 1852 omaha_request_action.cc:272] Request: May 27 17:05:21.251704 update_engine[1852]: May 27 17:05:21.251704 update_engine[1852]: May 27 17:05:21.251704 update_engine[1852]: May 27 17:05:21.251704 update_engine[1852]: May 27 17:05:21.251704 update_engine[1852]: May 27 17:05:21.251704 update_engine[1852]: May 27 17:05:21.251704 update_engine[1852]: I20250527 17:05:21.251242 1852 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 17:05:21.251704 update_engine[1852]: I20250527 17:05:21.251401 1852 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:05:21.251704 update_engine[1852]: I20250527 17:05:21.251663 1852 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 27 17:05:21.251923 locksmithd[2005]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 27 17:05:21.283234 update_engine[1852]: E20250527 17:05:21.283031 1852 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:05:21.283234 update_engine[1852]: I20250527 17:05:21.283121 1852 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 27 17:05:21.283234 update_engine[1852]: I20250527 17:05:21.283128 1852 omaha_request_action.cc:617] Omaha request response: May 27 17:05:21.283234 update_engine[1852]: I20250527 17:05:21.283133 1852 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 17:05:21.283234 update_engine[1852]: I20250527 17:05:21.283136 1852 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 27 17:05:21.283234 update_engine[1852]: I20250527 17:05:21.283140 1852 update_attempter.cc:306] Processing Done. May 27 17:05:21.283234 update_engine[1852]: I20250527 17:05:21.283145 1852 update_attempter.cc:310] Error event sent. 
May 27 17:05:21.283234 update_engine[1852]: I20250527 17:05:21.283155 1852 update_check_scheduler.cc:74] Next update check in 47m20s May 27 17:05:21.283800 locksmithd[2005]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 27 17:05:21.423580 kubelet[3382]: I0527 17:05:21.423510 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r6nxw" podStartSLOduration=55.423496038 podStartE2EDuration="55.423496038s" podCreationTimestamp="2025-05-27 17:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:05:21.408742454 +0000 UTC m=+59.428256691" watchObservedRunningTime="2025-05-27 17:05:21.423496038 +0000 UTC m=+59.443010275" May 27 17:05:21.887136 systemd-networkd[1692]: calic51106b393f: Gained IPv6LL May 27 17:05:22.114305 containerd[1871]: time="2025-05-27T17:05:22.114245256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:22.119901 containerd[1871]: time="2025-05-27T17:05:22.119032231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 27 17:05:22.124853 containerd[1871]: time="2025-05-27T17:05:22.124810239Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:22.133102 containerd[1871]: time="2025-05-27T17:05:22.133024903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:22.133768 containerd[1871]: time="2025-05-27T17:05:22.133437479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" 
with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.425183058s" May 27 17:05:22.133768 containerd[1871]: time="2025-05-27T17:05:22.133471479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 27 17:05:22.137971 containerd[1871]: time="2025-05-27T17:05:22.137654071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 17:05:22.146136 containerd[1871]: time="2025-05-27T17:05:22.146091542Z" level=info msg="CreateContainer within sandbox \"e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 17:05:22.183881 containerd[1871]: time="2025-05-27T17:05:22.183081781Z" level=info msg="Container 32761462a4fdfab69fe48e0d4541db5d500afece979c1eaf1ec02e84579d63f7: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:22.212819 containerd[1871]: time="2025-05-27T17:05:22.212768571Z" level=info msg="CreateContainer within sandbox \"e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"32761462a4fdfab69fe48e0d4541db5d500afece979c1eaf1ec02e84579d63f7\"" May 27 17:05:22.214403 containerd[1871]: time="2025-05-27T17:05:22.214356459Z" level=info msg="StartContainer for \"32761462a4fdfab69fe48e0d4541db5d500afece979c1eaf1ec02e84579d63f7\"" May 27 17:05:22.217054 containerd[1871]: time="2025-05-27T17:05:22.216978307Z" level=info msg="connecting to shim 32761462a4fdfab69fe48e0d4541db5d500afece979c1eaf1ec02e84579d63f7" address="unix:///run/containerd/s/81382238319e26c2031d26d508fb64dea5acc59b3b8a345cf532fc80a53ab792" protocol=ttrpc version=3 May 27 
17:05:22.239206 systemd[1]: Started cri-containerd-32761462a4fdfab69fe48e0d4541db5d500afece979c1eaf1ec02e84579d63f7.scope - libcontainer container 32761462a4fdfab69fe48e0d4541db5d500afece979c1eaf1ec02e84579d63f7. May 27 17:05:22.271599 systemd-networkd[1692]: calife07e824c4c: Gained IPv6LL May 27 17:05:22.279480 containerd[1871]: time="2025-05-27T17:05:22.279435240Z" level=info msg="StartContainer for \"32761462a4fdfab69fe48e0d4541db5d500afece979c1eaf1ec02e84579d63f7\" returns successfully" May 27 17:05:22.527125 systemd-networkd[1692]: calif74cfae19d5: Gained IPv6LL May 27 17:05:24.598119 containerd[1871]: time="2025-05-27T17:05:24.598061195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:24.601524 containerd[1871]: time="2025-05-27T17:05:24.601466963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 27 17:05:24.607424 containerd[1871]: time="2025-05-27T17:05:24.607340810Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:24.618288 containerd[1871]: time="2025-05-27T17:05:24.618210370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:24.618956 containerd[1871]: time="2025-05-27T17:05:24.618744074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", 
size \"49414428\" in 2.479692499s" May 27 17:05:24.618956 containerd[1871]: time="2025-05-27T17:05:24.618784970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 27 17:05:24.620651 containerd[1871]: time="2025-05-27T17:05:24.620617698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 17:05:24.639143 containerd[1871]: time="2025-05-27T17:05:24.638965817Z" level=info msg="CreateContainer within sandbox \"dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 17:05:24.677016 containerd[1871]: time="2025-05-27T17:05:24.675715015Z" level=info msg="Container 99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:24.679496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890003200.mount: Deactivated successfully. 
May 27 17:05:24.716433 containerd[1871]: time="2025-05-27T17:05:24.716345934Z" level=info msg="CreateContainer within sandbox \"dbf3d5e06dc6e3e9de68a620b2880daadca0c0d702970d1970193f509034f7eb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\"" May 27 17:05:24.718162 containerd[1871]: time="2025-05-27T17:05:24.718100109Z" level=info msg="StartContainer for \"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\"" May 27 17:05:24.719235 containerd[1871]: time="2025-05-27T17:05:24.719201349Z" level=info msg="connecting to shim 99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9" address="unix:///run/containerd/s/c1e09d217817eea122c164461ecaf9ca9b39fc82b3bf5942f6f4bad341fa3b51" protocol=ttrpc version=3 May 27 17:05:24.742234 systemd[1]: Started cri-containerd-99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9.scope - libcontainer container 99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9. 
May 27 17:05:24.790030 containerd[1871]: time="2025-05-27T17:05:24.789510562Z" level=info msg="StartContainer for \"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\" returns successfully" May 27 17:05:25.445833 containerd[1871]: time="2025-05-27T17:05:25.445779294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\" id:\"90f5b7a929e965b9a84ec71fc9438740e0925363c222c28c64782696ae0bd585\" pid:5711 exited_at:{seconds:1748365525 nanos:440461166}" May 27 17:05:25.467017 kubelet[3382]: I0527 17:05:25.466877 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58544bdc56-m6766" podStartSLOduration=43.585927286 podStartE2EDuration="47.466857205s" podCreationTimestamp="2025-05-27 17:04:38 +0000 UTC" firstStartedPulling="2025-05-27 17:05:20.738864707 +0000 UTC m=+58.758378952" lastFinishedPulling="2025-05-27 17:05:24.619794634 +0000 UTC m=+62.639308871" observedRunningTime="2025-05-27 17:05:25.430818134 +0000 UTC m=+63.450332507" watchObservedRunningTime="2025-05-27 17:05:25.466857205 +0000 UTC m=+63.486371442" May 27 17:05:26.120838 containerd[1871]: time="2025-05-27T17:05:26.120778016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:26.125495 containerd[1871]: time="2025-05-27T17:05:26.125441576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 27 17:05:26.135740 containerd[1871]: time="2025-05-27T17:05:26.135662071Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:26.146083 containerd[1871]: time="2025-05-27T17:05:26.145985999Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:05:26.146764 containerd[1871]: time="2025-05-27T17:05:26.146570439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 1.525915421s" May 27 17:05:26.146764 containerd[1871]: time="2025-05-27T17:05:26.146601671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 27 17:05:26.165634 containerd[1871]: time="2025-05-27T17:05:26.165584614Z" level=info msg="CreateContainer within sandbox \"e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 17:05:26.206393 containerd[1871]: time="2025-05-27T17:05:26.206338332Z" level=info msg="Container 7233a830ace8aa81faa4d2a92fcf12cfe81b9218f6414bfd3e4c062bbe4428a1: CDI devices from CRI Config.CDIDevices: []" May 27 17:05:26.210617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010872804.mount: Deactivated successfully. 
May 27 17:05:26.230859 containerd[1871]: time="2025-05-27T17:05:26.230796875Z" level=info msg="CreateContainer within sandbox \"e5029953ac0a1661a6412a153f73e5e1095e60d1a4d43e093fddaf81b5867751\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7233a830ace8aa81faa4d2a92fcf12cfe81b9218f6414bfd3e4c062bbe4428a1\"" May 27 17:05:26.231894 containerd[1871]: time="2025-05-27T17:05:26.231793867Z" level=info msg="StartContainer for \"7233a830ace8aa81faa4d2a92fcf12cfe81b9218f6414bfd3e4c062bbe4428a1\"" May 27 17:05:26.233314 containerd[1871]: time="2025-05-27T17:05:26.233276507Z" level=info msg="connecting to shim 7233a830ace8aa81faa4d2a92fcf12cfe81b9218f6414bfd3e4c062bbe4428a1" address="unix:///run/containerd/s/81382238319e26c2031d26d508fb64dea5acc59b3b8a345cf532fc80a53ab792" protocol=ttrpc version=3 May 27 17:05:26.284233 systemd[1]: Started cri-containerd-7233a830ace8aa81faa4d2a92fcf12cfe81b9218f6414bfd3e4c062bbe4428a1.scope - libcontainer container 7233a830ace8aa81faa4d2a92fcf12cfe81b9218f6414bfd3e4c062bbe4428a1. 
May 27 17:05:26.499145 containerd[1871]: time="2025-05-27T17:05:26.498918144Z" level=info msg="StartContainer for \"7233a830ace8aa81faa4d2a92fcf12cfe81b9218f6414bfd3e4c062bbe4428a1\" returns successfully" May 27 17:05:27.162201 containerd[1871]: time="2025-05-27T17:05:27.161875439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:05:27.236577 kubelet[3382]: I0527 17:05:27.236517 3382 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 17:05:27.239663 kubelet[3382]: I0527 17:05:27.239602 3382 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 17:05:27.387383 containerd[1871]: time="2025-05-27T17:05:27.387178414Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:27.392164 containerd[1871]: time="2025-05-27T17:05:27.392035812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:27.392164 containerd[1871]: time="2025-05-27T17:05:27.392078940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:05:27.392542 kubelet[3382]: E0527 17:05:27.392505 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:27.392715 kubelet[3382]: E0527 17:05:27.392697 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:27.393027 kubelet[3382]: E0527 17:05:27.392909 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12ab7f8fc8ea4181b794bd8076014a03,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kk55q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f77999cb5-qpzlk_calico-system(2501c128-3fe4-46d7-bc55-201cf393d5e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:27.396235 containerd[1871]: 
time="2025-05-27T17:05:27.396207953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:05:27.563755 containerd[1871]: time="2025-05-27T17:05:27.563610086Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:27.569338 containerd[1871]: time="2025-05-27T17:05:27.569216324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:27.569638 containerd[1871]: time="2025-05-27T17:05:27.569245740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:05:27.569862 kubelet[3382]: E0527 17:05:27.569814 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:27.569950 kubelet[3382]: E0527 17:05:27.569868 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:27.570029 kubelet[3382]: E0527 17:05:27.569985 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk55q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f77999cb5-qpzlk_calico-system(2501c128-3fe4-46d7-bc55-201cf393d5e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:27.571207 kubelet[3382]: E0527 17:05:27.571156 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1" May 27 17:05:28.164022 containerd[1871]: 
time="2025-05-27T17:05:28.163849757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:05:28.190408 kubelet[3382]: I0527 17:05:28.190311 3382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tmb64" podStartSLOduration=44.746119041 podStartE2EDuration="50.190288699s" podCreationTimestamp="2025-05-27 17:04:38 +0000 UTC" firstStartedPulling="2025-05-27 17:05:20.703440861 +0000 UTC m=+58.722955106" lastFinishedPulling="2025-05-27 17:05:26.147610519 +0000 UTC m=+64.167124764" observedRunningTime="2025-05-27 17:05:27.517445762 +0000 UTC m=+65.536959999" watchObservedRunningTime="2025-05-27 17:05:28.190288699 +0000 UTC m=+66.209803000" May 27 17:05:28.367977 containerd[1871]: time="2025-05-27T17:05:28.367770979Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:28.379362 containerd[1871]: time="2025-05-27T17:05:28.379227216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:28.379362 containerd[1871]: time="2025-05-27T17:05:28.379266160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:05:28.379573 kubelet[3382]: E0527 17:05:28.379473 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:28.379573 kubelet[3382]: E0527 17:05:28.379533 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:28.379875 kubelet[3382]: E0527 17:05:28.379650 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lprff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-2hnhp_calico-system(1ed3902b-f80a-4d5d-a3a8-c425965d5219): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:28.381209 kubelet[3382]: E0527 17:05:28.381148 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:05:36.413969 containerd[1871]: 
time="2025-05-27T17:05:36.413927034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f\" id:\"6ad141192d8f8ccad07b90a3778f307b4ec022db0e2d99541e54ac3bcd960dbc\" pid:5783 exited_at:{seconds:1748365536 nanos:413633323}" May 27 17:05:38.028196 containerd[1871]: time="2025-05-27T17:05:38.028093195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\" id:\"d0d31fc16d778ee05a769b4f8dfa8fce024bf0572d8a48599311396a5b102c49\" pid:5808 exited_at:{seconds:1748365538 nanos:27839211}" May 27 17:05:39.160880 kubelet[3382]: E0527 17:05:39.160806 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:05:40.165409 kubelet[3382]: E0527 17:05:40.165284 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 
Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1" May 27 17:05:50.161882 containerd[1871]: time="2025-05-27T17:05:50.161817779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:05:50.421468 containerd[1871]: time="2025-05-27T17:05:50.421191296Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:50.425273 containerd[1871]: time="2025-05-27T17:05:50.425178848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:50.425598 containerd[1871]: time="2025-05-27T17:05:50.425218664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:05:50.425816 kubelet[3382]: E0527 17:05:50.425774 3382 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:50.426243 kubelet[3382]: E0527 17:05:50.425831 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:05:50.426243 kubelet[3382]: E0527 17:05:50.425956 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lprff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-2hnhp_calico-system(1ed3902b-f80a-4d5d-a3a8-c425965d5219): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:50.427166 kubelet[3382]: E0527 17:05:50.427131 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:05:53.162329 containerd[1871]: 
time="2025-05-27T17:05:53.162237342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 17:05:53.363292 containerd[1871]: time="2025-05-27T17:05:53.363098341Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:53.366107 containerd[1871]: time="2025-05-27T17:05:53.366034830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:53.366369 containerd[1871]: time="2025-05-27T17:05:53.366065822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 17:05:53.366758 kubelet[3382]: E0527 17:05:53.366501 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:53.366758 kubelet[3382]: E0527 17:05:53.366556 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 17:05:53.366758 kubelet[3382]: E0527 17:05:53.366675 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12ab7f8fc8ea4181b794bd8076014a03,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kk55q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f77999cb5-qpzlk_calico-system(2501c128-3fe4-46d7-bc55-201cf393d5e1): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:53.370596 containerd[1871]: time="2025-05-27T17:05:53.370516887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 17:05:53.537141 containerd[1871]: time="2025-05-27T17:05:53.536236171Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:05:53.540457 containerd[1871]: time="2025-05-27T17:05:53.540384925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:05:53.540690 containerd[1871]: time="2025-05-27T17:05:53.540420997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 17:05:53.540724 kubelet[3382]: E0527 17:05:53.540667 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from 
GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:53.540724 kubelet[3382]: E0527 17:05:53.540714 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 17:05:53.541248 kubelet[3382]: E0527 17:05:53.541174 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk55q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readines
sProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f77999cb5-qpzlk_calico-system(2501c128-3fe4-46d7-bc55-201cf393d5e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:05:53.542478 kubelet[3382]: E0527 17:05:53.542389 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1" May 27 17:05:55.475689 containerd[1871]: time="2025-05-27T17:05:55.475546194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\" id:\"879a873ff897da79b77c8376d0748c34c713356ac2bde92f5404fdd61eefc0bd\" pid:5843 exited_at:{seconds:1748365555 nanos:474953746}" May 27 17:06:01.160694 kubelet[3382]: E0527 17:06:01.160644 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:06:04.162082 kubelet[3382]: E0527 17:06:04.161943 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" 
for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1" May 27 17:06:06.508955 containerd[1871]: time="2025-05-27T17:06:06.508675137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f\" id:\"ffea36c784280bae704fe78032f58a6d5f67819bd93c6b8289ce41a048ed6f72\" pid:5869 exited_at:{seconds:1748365566 nanos:507617249}" May 27 17:06:12.161177 kubelet[3382]: E0527 17:06:12.160854 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:06:15.162383 kubelet[3382]: E0527 17:06:15.162198 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1" May 27 17:06:16.597246 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:36284.service - OpenSSH per-connection server daemon (10.200.16.10:36284). May 27 17:06:17.053020 sshd[5887]: Accepted publickey for core from 10.200.16.10 port 36284 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:17.055513 sshd-session[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:17.060065 systemd-logind[1849]: New session 10 of user core. May 27 17:06:17.065257 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 17:06:17.589016 sshd[5889]: Connection closed by 10.200.16.10 port 36284 May 27 17:06:17.587501 sshd-session[5887]: pam_unix(sshd:session): session closed for user core May 27 17:06:17.592440 systemd-logind[1849]: Session 10 logged out. Waiting for processes to exit. May 27 17:06:17.592590 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:36284.service: Deactivated successfully. 
May 27 17:06:17.595119 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:06:17.599290 systemd-logind[1849]: Removed session 10. May 27 17:06:22.676376 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:39738.service - OpenSSH per-connection server daemon (10.200.16.10:39738). May 27 17:06:23.170252 sshd[5905]: Accepted publickey for core from 10.200.16.10 port 39738 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:23.172419 sshd-session[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:23.177005 systemd-logind[1849]: New session 11 of user core. May 27 17:06:23.185146 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:06:23.582694 sshd[5907]: Connection closed by 10.200.16.10 port 39738 May 27 17:06:23.583253 sshd-session[5905]: pam_unix(sshd:session): session closed for user core May 27 17:06:23.587371 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:39738.service: Deactivated successfully. May 27 17:06:23.590675 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:06:23.592106 systemd-logind[1849]: Session 11 logged out. Waiting for processes to exit. May 27 17:06:23.594503 systemd-logind[1849]: Removed session 11. 
May 27 17:06:25.159983 kubelet[3382]: E0527 17:06:25.159695 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:06:25.439451 containerd[1871]: time="2025-05-27T17:06:25.439317679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\" id:\"e67f30d2556ed333ac5740a0021deada62d8009987aadffda92a5ab6ca38b3cd\" pid:5931 exited_at:{seconds:1748365585 nanos:438976199}" May 27 17:06:27.160689 kubelet[3382]: E0527 17:06:27.160533 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1" May 27 17:06:28.681218 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:39746.service - OpenSSH per-connection server daemon (10.200.16.10:39746). May 27 17:06:29.178013 sshd[5944]: Accepted publickey for core from 10.200.16.10 port 39746 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:29.179660 sshd-session[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:29.184245 systemd-logind[1849]: New session 12 of user core. May 27 17:06:29.191216 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:06:29.579322 sshd[5946]: Connection closed by 10.200.16.10 port 39746 May 27 17:06:29.580213 sshd-session[5944]: pam_unix(sshd:session): session closed for user core May 27 17:06:29.583874 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:39746.service: Deactivated successfully. May 27 17:06:29.585670 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:06:29.587652 systemd-logind[1849]: Session 12 logged out. Waiting for processes to exit. May 27 17:06:29.588843 systemd-logind[1849]: Removed session 12. May 27 17:06:29.670582 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:48230.service - OpenSSH per-connection server daemon (10.200.16.10:48230). May 27 17:06:30.158836 sshd[5959]: Accepted publickey for core from 10.200.16.10 port 48230 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:30.160582 sshd-session[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:30.166044 systemd-logind[1849]: New session 13 of user core. 
May 27 17:06:30.171185 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 17:06:30.576196 sshd[5961]: Connection closed by 10.200.16.10 port 48230 May 27 17:06:30.575196 sshd-session[5959]: pam_unix(sshd:session): session closed for user core May 27 17:06:30.578715 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:48230.service: Deactivated successfully. May 27 17:06:30.581764 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:06:30.584192 systemd-logind[1849]: Session 13 logged out. Waiting for processes to exit. May 27 17:06:30.585792 systemd-logind[1849]: Removed session 13. May 27 17:06:30.663009 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:48232.service - OpenSSH per-connection server daemon (10.200.16.10:48232). May 27 17:06:31.140021 sshd[5976]: Accepted publickey for core from 10.200.16.10 port 48232 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:31.141648 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:31.148984 systemd-logind[1849]: New session 14 of user core. May 27 17:06:31.153368 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:06:31.545898 sshd[5982]: Connection closed by 10.200.16.10 port 48232 May 27 17:06:31.545708 sshd-session[5976]: pam_unix(sshd:session): session closed for user core May 27 17:06:31.548921 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:48232.service: Deactivated successfully. May 27 17:06:31.551689 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:06:31.553806 systemd-logind[1849]: Session 14 logged out. Waiting for processes to exit. May 27 17:06:31.555534 systemd-logind[1849]: Removed session 14. 
May 27 17:06:36.163406 containerd[1871]: time="2025-05-27T17:06:36.163361524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 17:06:36.335400 containerd[1871]: time="2025-05-27T17:06:36.335202019Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 17:06:36.339178 containerd[1871]: time="2025-05-27T17:06:36.339042643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 17:06:36.339178 containerd[1871]: time="2025-05-27T17:06:36.339084667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 17:06:36.339539 kubelet[3382]: E0527 17:06:36.339487 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:06:36.339939 kubelet[3382]: E0527 17:06:36.339554 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 17:06:36.339939 kubelet[3382]: E0527 17:06:36.339687 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lprff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-2hnhp_calico-system(1ed3902b-f80a-4d5d-a3a8-c425965d5219): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 17:06:36.341406 kubelet[3382]: E0527 17:06:36.341345 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219" May 27 17:06:36.418439 containerd[1871]: time="2025-05-27T17:06:36.417913194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f\" id:\"f1bd6dd4ff7c2dd53335a7cf67939d7109687ec820f981f4f867c8237c764f09\" pid:6005 exited_at:{seconds:1748365596 nanos:417602234}" May 27 17:06:36.633273 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:48234.service - OpenSSH per-connection server daemon (10.200.16.10:48234). May 27 17:06:37.093599 sshd[6017]: Accepted publickey for core from 10.200.16.10 port 48234 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY May 27 17:06:37.094943 sshd-session[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:06:37.099254 systemd-logind[1849]: New session 15 of user core. May 27 17:06:37.108316 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 17:06:37.492475 sshd[6019]: Connection closed by 10.200.16.10 port 48234 May 27 17:06:37.493092 sshd-session[6017]: pam_unix(sshd:session): session closed for user core May 27 17:06:37.496985 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:48234.service: Deactivated successfully. May 27 17:06:37.499278 systemd[1]: session-15.scope: Deactivated successfully. May 27 17:06:37.500150 systemd-logind[1849]: Session 15 logged out. Waiting for processes to exit. May 27 17:06:37.502455 systemd-logind[1849]: Removed session 15. 
May 27 17:06:38.028244 containerd[1871]: time="2025-05-27T17:06:38.028202533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\" id:\"db9356efebf3f7c0364320fad5208d6ab5d5eb633918f6fd02750520872aa204\" pid:6043 exited_at:{seconds:1748365598 nanos:27724349}"
May 27 17:06:42.161445 containerd[1871]: time="2025-05-27T17:06:42.160871906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 17:06:42.316783 containerd[1871]: time="2025-05-27T17:06:42.316726048Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:06:42.322706 containerd[1871]: time="2025-05-27T17:06:42.322646128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:06:42.322913 containerd[1871]: time="2025-05-27T17:06:42.322689392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 17:06:42.322956 kubelet[3382]: E0527 17:06:42.322911 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:06:42.323333 kubelet[3382]: E0527 17:06:42.322962 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 17:06:42.323556 kubelet[3382]: E0527 17:06:42.323520 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12ab7f8fc8ea4181b794bd8076014a03,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kk55q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f77999cb5-qpzlk_calico-system(2501c128-3fe4-46d7-bc55-201cf393d5e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:06:42.326169 containerd[1871]: time="2025-05-27T17:06:42.326131680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 17:06:42.525093 containerd[1871]: time="2025-05-27T17:06:42.524791468Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 17:06:42.528103 containerd[1871]: time="2025-05-27T17:06:42.527718204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 17:06:42.528103 containerd[1871]: time="2025-05-27T17:06:42.527757892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 17:06:42.528569 kubelet[3382]: E0527 17:06:42.528520 3382 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:06:42.528642 kubelet[3382]: E0527 17:06:42.528581 3382 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 17:06:42.528753 kubelet[3382]: E0527 17:06:42.528693 3382 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk55q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f77999cb5-qpzlk_calico-system(2501c128-3fe4-46d7-bc55-201cf393d5e1): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 17:06:42.529890 kubelet[3382]: E0527 17:06:42.529847 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1"
May 27 17:06:42.580877 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:54934.service - OpenSSH per-connection server daemon (10.200.16.10:54934).
May 27 17:06:43.030789 sshd[6057]: Accepted publickey for core from 10.200.16.10 port 54934 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:06:43.032105 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:06:43.036280 systemd-logind[1849]: New session 16 of user core.
May 27 17:06:43.041158 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 17:06:43.421233 sshd[6059]: Connection closed by 10.200.16.10 port 54934
May 27 17:06:43.421902 sshd-session[6057]: pam_unix(sshd:session): session closed for user core
May 27 17:06:43.425404 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:54934.service: Deactivated successfully.
May 27 17:06:43.428808 systemd[1]: session-16.scope: Deactivated successfully.
May 27 17:06:43.429758 systemd-logind[1849]: Session 16 logged out. Waiting for processes to exit.
May 27 17:06:43.431208 systemd-logind[1849]: Removed session 16.
May 27 17:06:48.510301 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:54940.service - OpenSSH per-connection server daemon (10.200.16.10:54940).
May 27 17:06:49.007847 sshd[6092]: Accepted publickey for core from 10.200.16.10 port 54940 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:06:49.009216 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:06:49.013951 systemd-logind[1849]: New session 17 of user core.
May 27 17:06:49.018159 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 17:06:49.161895 kubelet[3382]: E0527 17:06:49.161098 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219"
May 27 17:06:49.404889 sshd[6094]: Connection closed by 10.200.16.10 port 54940
May 27 17:06:49.404778 sshd-session[6092]: pam_unix(sshd:session): session closed for user core
May 27 17:06:49.410576 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:54940.service: Deactivated successfully.
May 27 17:06:49.411082 systemd-logind[1849]: Session 17 logged out. Waiting for processes to exit.
May 27 17:06:49.412765 systemd[1]: session-17.scope: Deactivated successfully.
May 27 17:06:49.415455 systemd-logind[1849]: Removed session 17.
May 27 17:06:49.506331 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:47736.service - OpenSSH per-connection server daemon (10.200.16.10:47736).
May 27 17:06:50.003371 sshd[6106]: Accepted publickey for core from 10.200.16.10 port 47736 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:06:50.005603 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:06:50.012206 systemd-logind[1849]: New session 18 of user core.
May 27 17:06:50.016234 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 17:06:50.504037 sshd[6108]: Connection closed by 10.200.16.10 port 47736
May 27 17:06:50.504976 sshd-session[6106]: pam_unix(sshd:session): session closed for user core
May 27 17:06:50.509144 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:47736.service: Deactivated successfully.
May 27 17:06:50.509221 systemd-logind[1849]: Session 18 logged out. Waiting for processes to exit.
May 27 17:06:50.512546 systemd[1]: session-18.scope: Deactivated successfully.
May 27 17:06:50.514061 systemd-logind[1849]: Removed session 18.
May 27 17:06:50.584551 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:47750.service - OpenSSH per-connection server daemon (10.200.16.10:47750).
May 27 17:06:51.048490 sshd[6117]: Accepted publickey for core from 10.200.16.10 port 47750 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:06:51.050038 sshd-session[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:06:51.055067 systemd-logind[1849]: New session 19 of user core.
May 27 17:06:51.060247 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 17:06:52.282304 sshd[6119]: Connection closed by 10.200.16.10 port 47750
May 27 17:06:52.282747 sshd-session[6117]: pam_unix(sshd:session): session closed for user core
May 27 17:06:52.287249 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:47750.service: Deactivated successfully.
May 27 17:06:52.289942 systemd[1]: session-19.scope: Deactivated successfully.
May 27 17:06:52.291440 systemd-logind[1849]: Session 19 logged out. Waiting for processes to exit.
May 27 17:06:52.293530 systemd-logind[1849]: Removed session 19.
May 27 17:06:52.368726 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:47760.service - OpenSSH per-connection server daemon (10.200.16.10:47760).
May 27 17:06:52.828645 sshd[6139]: Accepted publickey for core from 10.200.16.10 port 47760 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:06:52.831336 sshd-session[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:06:52.841227 systemd-logind[1849]: New session 20 of user core.
May 27 17:06:52.846317 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 17:06:53.404233 sshd[6141]: Connection closed by 10.200.16.10 port 47760
May 27 17:06:53.404512 sshd-session[6139]: pam_unix(sshd:session): session closed for user core
May 27 17:06:53.408177 systemd-logind[1849]: Session 20 logged out. Waiting for processes to exit.
May 27 17:06:53.409974 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:47760.service: Deactivated successfully.
May 27 17:06:53.412337 systemd[1]: session-20.scope: Deactivated successfully.
May 27 17:06:53.418767 systemd-logind[1849]: Removed session 20.
May 27 17:06:53.484763 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:47764.service - OpenSSH per-connection server daemon (10.200.16.10:47764).
May 27 17:06:53.939666 sshd[6152]: Accepted publickey for core from 10.200.16.10 port 47764 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:06:53.941087 sshd-session[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:06:53.945660 systemd-logind[1849]: New session 21 of user core.
May 27 17:06:53.950170 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 17:06:54.326152 sshd[6154]: Connection closed by 10.200.16.10 port 47764
May 27 17:06:54.326864 sshd-session[6152]: pam_unix(sshd:session): session closed for user core
May 27 17:06:54.331076 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:47764.service: Deactivated successfully.
May 27 17:06:54.331078 systemd-logind[1849]: Session 21 logged out. Waiting for processes to exit.
May 27 17:06:54.333531 systemd[1]: session-21.scope: Deactivated successfully.
May 27 17:06:54.335189 systemd-logind[1849]: Removed session 21.
May 27 17:06:55.163178 kubelet[3382]: E0527 17:06:55.163062 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1"
May 27 17:06:55.438884 containerd[1871]: time="2025-05-27T17:06:55.438563915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\" id:\"3a1fc574db7d23ecc984cc3117e4a817b1f64c9f947b64258dde2d4cbe80da7b\" pid:6177 exited_at:{seconds:1748365615 nanos:438044523}"
May 27 17:06:59.412276 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:56762.service - OpenSSH per-connection server daemon (10.200.16.10:56762).
May 27 17:06:59.865693 sshd[6190]: Accepted publickey for core from 10.200.16.10 port 56762 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:06:59.867115 sshd-session[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:06:59.871430 systemd-logind[1849]: New session 22 of user core.
May 27 17:06:59.877318 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 17:07:00.259525 sshd[6192]: Connection closed by 10.200.16.10 port 56762
May 27 17:07:00.259275 sshd-session[6190]: pam_unix(sshd:session): session closed for user core
May 27 17:07:00.264029 systemd-logind[1849]: Session 22 logged out. Waiting for processes to exit.
May 27 17:07:00.264114 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:56762.service: Deactivated successfully.
May 27 17:07:00.267847 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:07:00.269617 systemd-logind[1849]: Removed session 22.
May 27 17:07:02.160725 kubelet[3382]: E0527 17:07:02.160509 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219"
May 27 17:07:05.346730 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:56768.service - OpenSSH per-connection server daemon (10.200.16.10:56768).
May 27 17:07:05.835898 sshd[6203]: Accepted publickey for core from 10.200.16.10 port 56768 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:07:05.837276 sshd-session[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:07:05.842248 systemd-logind[1849]: New session 23 of user core.
May 27 17:07:05.847263 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 17:07:06.236638 sshd[6205]: Connection closed by 10.200.16.10 port 56768
May 27 17:07:06.238085 sshd-session[6203]: pam_unix(sshd:session): session closed for user core
May 27 17:07:06.241752 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:56768.service: Deactivated successfully.
May 27 17:07:06.243410 systemd[1]: session-23.scope: Deactivated successfully.
May 27 17:07:06.244155 systemd-logind[1849]: Session 23 logged out. Waiting for processes to exit.
May 27 17:07:06.245786 systemd-logind[1849]: Removed session 23.
May 27 17:07:06.414410 containerd[1871]: time="2025-05-27T17:07:06.414336212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"319ebaf0c1336151458e2dd1743b02a13cff1f6ebf5da73c6b194e0546c2693f\" id:\"8e21434b3b1cb9ab89aba6943269a6749a96bb73731c8f65e7c89a1d57ba8a7a\" pid:6228 exited_at:{seconds:1748365626 nanos:413882116}"
May 27 17:07:08.161328 kubelet[3382]: E0527 17:07:08.161269 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1"
May 27 17:07:11.327189 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:48130.service - OpenSSH per-connection server daemon (10.200.16.10:48130).
May 27 17:07:11.817836 sshd[6241]: Accepted publickey for core from 10.200.16.10 port 48130 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:07:11.819251 sshd-session[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:07:11.827703 systemd-logind[1849]: New session 24 of user core.
May 27 17:07:11.832188 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 17:07:12.212635 sshd[6243]: Connection closed by 10.200.16.10 port 48130
May 27 17:07:12.212119 sshd-session[6241]: pam_unix(sshd:session): session closed for user core
May 27 17:07:12.215478 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:48130.service: Deactivated successfully.
May 27 17:07:12.217365 systemd[1]: session-24.scope: Deactivated successfully.
May 27 17:07:12.218045 systemd-logind[1849]: Session 24 logged out. Waiting for processes to exit.
May 27 17:07:12.220870 systemd-logind[1849]: Removed session 24.
May 27 17:07:15.160275 kubelet[3382]: E0527 17:07:15.160197 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-2hnhp" podUID="1ed3902b-f80a-4d5d-a3a8-c425965d5219"
May 27 17:07:17.299237 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:48136.service - OpenSSH per-connection server daemon (10.200.16.10:48136).
May 27 17:07:17.757844 sshd[6254]: Accepted publickey for core from 10.200.16.10 port 48136 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:07:17.759290 sshd-session[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:07:17.766601 systemd-logind[1849]: New session 25 of user core.
May 27 17:07:17.775231 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 17:07:18.135690 sshd[6256]: Connection closed by 10.200.16.10 port 48136
May 27 17:07:18.136334 sshd-session[6254]: pam_unix(sshd:session): session closed for user core
May 27 17:07:18.140105 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:48136.service: Deactivated successfully.
May 27 17:07:18.142370 systemd[1]: session-25.scope: Deactivated successfully.
May 27 17:07:18.143310 systemd-logind[1849]: Session 25 logged out. Waiting for processes to exit.
May 27 17:07:18.144741 systemd-logind[1849]: Removed session 25.
May 27 17:07:23.161491 kubelet[3382]: E0527 17:07:23.161440 3382 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f77999cb5-qpzlk" podUID="2501c128-3fe4-46d7-bc55-201cf393d5e1"
May 27 17:07:23.226616 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:37696.service - OpenSSH per-connection server daemon (10.200.16.10:37696).
May 27 17:07:23.713888 sshd[6270]: Accepted publickey for core from 10.200.16.10 port 37696 ssh2: RSA SHA256:3fBULumcCrNESkjd3rLTu0+/iXnpcaud7Fw1tGwpIiY
May 27 17:07:23.715248 sshd-session[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:07:23.719859 systemd-logind[1849]: New session 26 of user core.
May 27 17:07:23.728196 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 17:07:24.110383 sshd[6272]: Connection closed by 10.200.16.10 port 37696
May 27 17:07:24.109692 sshd-session[6270]: pam_unix(sshd:session): session closed for user core
May 27 17:07:24.114978 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:37696.service: Deactivated successfully.
May 27 17:07:24.117722 systemd[1]: session-26.scope: Deactivated successfully.
May 27 17:07:24.119476 systemd-logind[1849]: Session 26 logged out. Waiting for processes to exit.
May 27 17:07:24.121709 systemd-logind[1849]: Removed session 26.
May 27 17:07:25.436515 containerd[1871]: time="2025-05-27T17:07:25.436467040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99c2aa99d4f453515a2b01741e996626902ce40a1e8157419276abbe06d741c9\" id:\"d062942f59c5f6ff3c622d2fe87400dbeea92af175add2e2b8ef2562d5403f46\" pid:6295 exited_at:{seconds:1748365645 nanos:436156496}"